
MIT Artificial Vision Researchers Assemble 16-GPU Machine

timothy posted more than 6 years ago | from the many-many-little-dots dept.

Graphics 121

lindik writes "As part of their research efforts aimed at building real-time human-level artificial vision systems inspired by the brain, MIT graduate student Nicolas Pinto and principal investigators David Cox (Rowland Institute at Harvard) and James DiCarlo (McGovern Institute for Brain Research at MIT) recently assembled an impressive 16-GPU 'monster' composed of eight dual-GPU 9800 GX2 cards donated by NVIDIA. The high-throughput method they promote can also use other ubiquitous technologies like IBM's Cell Broadband Engine processor (included in Sony's PlayStation 3) or Amazon's Elastic Compute Cloud (EC2) service. Interestingly, the team is also involved in the PetaVision project on Roadrunner, the world's fastest supercomputer."

121 comments

Just to get it out of the way... (4, Funny)

im_thatoneguy (819432) | more than 6 years ago | (#24355963)

"But can it run Crysis?"

*Ducks*

Re:Just to get it out of the way... (1)

TheLink (130905) | more than 6 years ago | (#24355989)

Of course not.

Re:Just to get it out of the way... (3, Funny)

dtml-try MyNick (453562) | more than 6 years ago | (#24356021)

The graphics cards might be able to handle it, but you'd have to run Vista in order to get maximized settings.
Now there is a problem.

Re:Just to get it out of the way... (1, Informative)

Anonymous Coward | more than 6 years ago | (#24356143)

There is hardly a difference [gamespot.com] between Crysis under DX9 and DX10. DX10 "features" are a Microsoft scam to promote Vista, nothing more.

So yes, you can maximise the detail levels on XP.

Re:Just to get it out of the way... (1)

killmofasta (460565) | more than 6 years ago | (#24356429)

I don't see that at all. There is, at least in the second shot, an incredible difference in the mid-foreground detail. The second shot shows it off the best, and the background is really 3D-looking, whereas the other shots look like a Bollywood set. I'm loading and stripping (vLite) Vista next weekend, so I'll have a look at DX10, as well as hacking DX10 to work under Vista.

Re:Just to get it out of the way... (1)

IllForgetMyNickSoonA (748496) | more than 6 years ago | (#24357811)

Hmm... actually, some screenshots show even MORE details on the XP DX9 version than on the Vista DX10. Most screenshots look practically identical. Actually, very disappointing.

I'd call it a tie.

DX10 vs DX9 (4, Informative)

DrYak (748999) | more than 6 years ago | (#24356603)

There are two main differences between DX9 and DX10:

I - The shaders offered by the two APIs are different (Shader Model 3 vs. 4). None of the DX9 screenshots shows self-shadowing. This is especially visible on the rocks (but it's even in action on the planks of the fences). So there *are* additional subtleties available under Vista.

II - The driver architecture is much more complex in Vista, because it is built to enable cooperation between several separate processes all using the graphics hardware at the same time. Even though Vista automatically disables Aero when games run full-screen (and thus the game is the only process accessing the graphics card), the additional layers of abstraction have an impact on performance. This is especially visible at low quality settings, where the software overhead is more noticeable.

Re:Just to get it out of the way... (1)

fostware (551290) | more than 6 years ago | (#24356445)

It'll have to be Vista 64-bit...

Under Vista 32-bit you're left with only 640K after removing the video memory allocations.

Us normal SLi users win with only 3.2GB left :P

Re:Just to get it out of the way... (1)

oodaloop (1229816) | more than 6 years ago | (#24356071)

Yeah, and imagine a beowulf cluster of those!

I'm sorry, that was uncalled for.

Re:Just to get it out of the way... (1)

killmofasta (460565) | more than 6 years ago | (#24356409)

Isn't it a Beowulf cluster already? It was FREE as in beer... (Probably gave them every one they had!) Now they can run Folding@home!

Re:Just to get it out of the way... (1)

killmofasta (460565) | more than 6 years ago | (#24356511)

What an INCREDIBLE BOX. 15 fans on the front and sides. It must sound like a 747/MacProG5. Nice GPUs though...

Re:Just to get it out of the way... (1)

beelsebob (529313) | more than 6 years ago | (#24356599)

I assume you mean a PowerMac G5, because the Mac Pros are pretty much silent.

Was wondering about that too (1)

spineboy (22918) | more than 6 years ago | (#24357145)

So it's about one fan per GPU? Seems annoying and inefficient. Why not build it more spread apart, or use a "Central Air" system like people use in their homes?

Not using water cooling I understand, 'cause there'd be around 30 tubes snaking in and out of the box - something would fail/leak.

Re:Just to get it out of the way... (1)

nacturation (646836) | more than 6 years ago | (#24356145)

This is just what 3D Realms has been waiting for... it's almost powerful enough to run Duke Nukem Forever!
 

Re:Just to get it out of the way... (2, Informative)

kaizokuace (1082079) | more than 6 years ago | (#24356457)

No it isn't. Duke Nukem Forever will be released when a powerful enough computer is assembled. The game will just manifest itself in the machine once powered up. But you have to have downloaded 20TB of porn and covered the internals with a thin layer of cigar smoke first.

Fascinating (5, Interesting)

AlienIntelligence (1184493) | more than 6 years ago | (#24356391)

I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating.

This is a classic moment when tech takes the branch that was unexpected. GPGPU computing [gpgpu.org] will soon reach ubiquity, but for right now it's a fledgling being grown in the wild.

Of course I'm not earmarking this one particular project as the start point, but this year has 'GPU this' and 'GPGPU that' start-up events all over it. Some even said in 2007 that it would be a buzzword in '08 [theinquirer.net].

And of course there's nothing like new tech to bring out [intel.com] a naysayer.

Folding@home [stanford.edu] released their second generation [stanford.edu] GPU client in April '08, while retiring the GPU1 core in June of this year.

I know I enjoy throwing spare GPU cycles at a distributed cause, and whenever I catch sight of the icon for the GPU [stanford.edu] client it brings back the nostalgia of distributed clients [wikipedia.org] of the past [near the bottom].

I think I was with United Devices [wikipedia.org] the longest. And the Grid [grid.org].

Now we are getting a chance to see GPU supercomputing installations from IBM [eurekalert.org] and this one from MIT. Soon those will be littering the Top 500 list [top500.org].

I also look forward most to the peaceful endeavors the new processing power will be used for: weather analysis [unisys.com], drug creation [wikipedia.org], and disease studies [medicalnewstoday.com].

Oh yes, I realize places like the infamous Sandia will be using the GPU to rev up atom splitting. But maybe if they keep their bombs IN the GPU it'll lessen the chances of seeing rampant proliferation again.

Ok, well, enough of my musings over a GPU.

-AI

Re:Fascinating (1)

TheRaven64 (641858) | more than 6 years ago | (#24356507)

Huh? GPGPU was a buzzword in 2005 (at least, that's when I first saw large numbers of books about the subject appearing). Now it's pretty much expected - GPUs are the cheapest stream-vector processors on the market at the moment.

Re:Fascinating (2, Insightful)

blahplusplus (757119) | more than 6 years ago | (#24357205)

"I think this part of the computing timeline is going to be one that is well remembered. I know I find it fascinating."

Well remembered? Perhaps... but I wouldn't sing their praises just yet. Advances in memory are critically necessary to keep computational speed rising. The big elephants in the room are heat, memory bandwidth, and latency. Part of the reason this round of GPUs was not as impressive is that increasing memory bandwidth linearly will sooner or later stop having the same effect; at some point the geometry of information comes into play and the law of diminishing returns sets in for any given architecture or memory type, at a price people can afford to pay.

Next up, 32-bit addressing is starting to be a real pain in the ass. The move to 64-bit operating systems is critical if we expect GPUs to keep increasing their memory (1GB+ of local memory on a card now).

Supreme Commander was one of the few games to hit the 4GB addressing limit, and more and more games will definitely do so in the future. I know you were talking about other areas of computing, but without the games market I don't see any serious reason for a regular person to upgrade their computer's video at all. The many who donate to distributed computing did so as an afterthought, not as the main reason they bought the card. As for the wider non-gaming market, we'll have to see whether or not GPU computing is going to be more widely adopted.

Lastly, let us not forget that one of the primary reasons GPU computing is so fast is memory bandwidth; delays in better memory technology will have big impacts on GPU performance. As we've seen with this generation of GPUs, NVIDIA's lack of GDDR5 and of a smaller process for the GTX 280 hurt them a lot.

Re:Fascinating (0)

Anonymous Coward | more than 6 years ago | (#24359891)

It seems to me the conclusion that ATI's new cards kill the GT200s is based on graphics performance benchmarks. I wouldn't dispute that. But it doesn't necessarily carry over to GPGPU tasks. Nvidia added double-precision support, doubled the register count per SP, and added various flexibilities to the model. I'm not a graphics wiz, but I know at least that DP is fairly useless for gfx.

Further, I'm not sure memory latency will be a big problem so long as bandwidth continues to increase. Latency is only an issue when you have to read what you just wrote, or when you can't find anything else to do while waiting for a read/write. Both are solved by a parallel algorithm.

64-bit addressing would be nice, but as long as the data is on the PC it may not be a big deal. In other words, as long as the data is in memory rather than on an HDD somewhere, you are relatively okay. You can page in 32-bit-addressable chunks to the cards pretty quickly (over 1GB/s transfer rate).

Re:Fascinating? (1)

Caelius (1282378) | more than 6 years ago | (#24357883)

I think it's great to see that we can finally start using GPUs to do things beyond gaming, but I also don't see it as the Great Second Coming of high-speed computing. GPUs are designed to tackle only one kind of problem, and a highly parallel problem at that. If you are a researcher and you can see huge gains in performance by using GPUs, then great! But GPUs are hardly general purpose, and will simply not address most of our computing needs. I see the rise of GPUs as similar to computing in the '60s(?): figure out what kind of software you need to run, and then design a computing platform around it. If you need to perform small operations on a highly parallel data structure, then a GPU cluster is an excellent way to go. -One beginning computer architect's opinion

Re:Just to get it out of the way... (0)

Anonymous Coward | more than 6 years ago | (#24356453)

i always wanted to mod a first post redundant. thank you.

Say no to proprietary NVIDIA hardware (0, Troll)

Anonymous Coward | more than 6 years ago | (#24356019)

AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same? What do they have to hide?

Re:Say no to proprietary NVIDIA hardware (0, Redundant)

Anonymous Coward | more than 6 years ago | (#24356065)

Profit?

Re:Say no to proprietary NVIDIA hardware (1, Interesting)

ya really (1257084) | more than 6 years ago | (#24356095)

AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same? What do they have to hide?

In terms of actually being totally non-proprietary, Nvidia has to worry about ATI stealing their drivers (which they would, or at least "borrow" a lot from them), since Nvidia generally has that as their trump card over ATI no matter who has the better hardware. On the other hand, Nvidia has no interest in "borrowing" from ATI's drivers. ATI knows that, and that's why their drivers are open. Yes, it may suck for wanting to run anything multimedia, graphical, or gaming-wise on Linux if you have an Nvidia card (I have an 8800GT and I feel the pain at times on KDE), but in this case I think Nvidia's rationale for not giving up their specs is reasonable. Now, if only they cared more about their drivers for Linux, proprietary or not.

Re:Say no to proprietary NVIDIA hardware (0, Flamebait)

Splab (574204) | more than 6 years ago | (#24356139)

Nice trolling, a bit of linkies perhaps to back your statements?

Re:Say no to proprietary NVIDIA hardware (4, Informative)

ya really (1257084) | more than 6 years ago | (#24356201)

Tom's Hardware [tomshardware.com] did a pretty good job detailing the ups and downs of ATI and Nvidia with many of the major games of last year (BioShock, World in Conflict, etc). Overall, both companies fared well, but they reported quite a few crashes due to the ATI drivers. I've had an ATI card before, the 9800XT, back when Nvidia was producing their horrible 5xxx series in 2003-04 that was totally worthless. The 9800XT was a good card for everything (gaming, graphical apps, etc). Sorry, I should have cited sources. Wasn't trolling on purpose, though I know that writing anything positive about Nvidia on Slashdot is borderline blasphemy.

Re:Say no to proprietary NVIDIA hardware (1)

Splab (574204) | more than 6 years ago | (#24356237)

You are claiming ATI will outright steal from Nvidia, whether one driver is better than the other doesn't matter, I want you to back up your claim that they would do something like that.

Re:Say no to proprietary NVIDIA hardware (1)

ya really (1257084) | more than 6 years ago | (#24356329)

You are claiming ATI will outright steal from Nvidia, whether one driver is better than the other doesn't matter, I want you to back up your claim that they would do something like that.

Would you like me to call up ATI and ask them?

ATI Customer Service: What can I help you with today?
Me: If Nvidia made their drivers OSS, would you borrow from them?
ATI Customer Service: I'm sorry sir, we cannot answer that at this time. Is there anything else I can help you with?
Me: Nope, thanks.

If someone makes a better product and the product's source is available for all to see, then most likely similar products from competitors will borrow from the concept, especially when it's legally allowed under the open source license. If Nvidia published their drivers, and given that they are generally (I said generally in my previous post as well, so please do not pigeonhole me) better than ATI's, what would stop ATI from borrowing ideas or code from them other than scout's honor or just stubbornness? I am not debating the ethics of it, since how is it wrong if it's not stealing? Aside from that, they could if they wanted to, without recourse, as I stated, depending on what license Nvidia would choose for their open source drivers.

Re:Say no to proprietary NVIDIA hardware (1)

ya really (1257084) | more than 6 years ago | (#24356349)

Also, isn't the concept of open source to share information to better the overall technology? If Nvidia feels that giving out their driver code will give ATI better video cards than their own, it would be insane for Nvidia to release it (which implies that ATI cards are more or less hardware-equivalent to Nvidia's). It may improve the overall tech, but only in favor of ATI (assuming ATI's own drivers do not improve from their own advancements unrelated to what they could potentially gain from Nvidia's). Hey, maybe there is a chance that Nvidia opening their drivers would cause ATI to make some sort of breakthrough in drivers themselves, thus improving both their own drivers and Nvidia's (if Nvidia chooses to implement it). Giving away the key to your company's flagship product would be economic suicide, the equivalent of Microsoft giving up its source code to XP, Vista, and Office (as much as anyone here would like Windows to go open source). On a final note, Nvidia has probably put too much money and time into their drivers to give them up, whereas ATI mostly invested more money into R & D of their hardware.

Re:Say no to proprietary NVIDIA hardware (0)

Anonymous Coward | more than 6 years ago | (#24356883)

"the equivlant to Windows giving up its source code to XP, Vista and Office (as much as anyone here would like Windows to go opensource)"

That won't happen any time soon - releasing 'their' source code would expose them to dual embarrassments:

1) Showing how much co-opted BSD code is in there

2) Showing how they managed to 'improve' on said BSD base with all their 'innovations' to produce the egregious POS we all know and hate.

...and possibly

3) Demonstrating how much obfuscation has gone into trying to disguise Winbloze's (semi-illegitimate) BSD parentage.

Keep drinkin' the KoolAid, MSFanBoyz....

Re:Say no to proprietary NVIDIA hardware (0)

Anonymous Coward | more than 6 years ago | (#24356923)

Oh yeah, almost forgot...

Ditto all the above, substitute VMS for BSD.

Re:Say no to proprietary NVIDIA hardware (1)

Splab (574204) | more than 6 years ago | (#24356359)

So I was right, you are trolling, too bad the mods can't see that.

Re:Say no to proprietary NVIDIA hardware (0)

Anonymous Coward | more than 6 years ago | (#24356769)

No, he's noting that, in general, the point of open-sourcing things is so that others can benefit from seeing how they work, and use the code in their own projects.
Since the only people to benefit from using the code in nVidia's drivers would be ATI (and, presumably, Intel), it is logical to assume that ATI might be interested in doing so.
It's not trolling to make a logical statement based on known facts, especially if it doesn't actually claim to defame anyone.

Re:Say no to proprietary NVIDIA hardware (1, Insightful)

Anonymous Coward | more than 6 years ago | (#24356399)

We should PAY ATI to use nVidia's drivers. I learned this on the Radeon 9800s. Solid, well-performing card, fairly good 3D performance. Drivers: utter and complete garbage. Used more memory, caused random crashes. I had to reinstall XP after I sold the card (and after I had re-installed XP twice before to fix the 'feature') to get rid of .NET 2.0. Got a GeForce 4 Ti to replace it. Was able to put a fan right over the GPU. Computer went MONTHS without crashing, no more blue screens (AMD 1.6 GHz dual). If I ever see the word 'Catalyst' it's really 'crap_is_this_sys'.

I now have a pair of nVidia 7600GTs on a CrossFire motherboard (yeah, it should have ATIs on it), but with the driver hack I can play a whole weekend with no problems. I can't remember this new box (AMD 2.0GHz) ever requiring rebooting, except for Windows Updates. I basically bought the rig to play Far Cry, and it's great on Far Cry. I can't wait to get Crysis.

Re:Say no to proprietary NVIDIA hardware (4, Informative)

TheRaven64 (641858) | more than 6 years ago | (#24356561)

A video card driver typically has three major components:
  • The parts specific to the windowing system (including context switching / multiplexing).
  • The parts specific to the 3D API.
  • The parts specific to the hardware.

ATi could conceivably steal the first two parts from nVidia, but it's doubtful that they could steal anything from the last part, since their hardware designs are sufficiently different to make this hard.

The problem nVidia are going to have is that the new Gallium architecture means that the first two parts are abstracted away and reusable, as is the fall-back path (which emulates functionality any specific GPU might be missing). This means that Intel and AMD both get to benefit from the other company (and random hippyware developers and other GPU manufacturers / users) improving the generic components, while nVidia are stuck developing their own entire alternative to DRI, DRM, Gallium, and Mesa. The upshot is that Intel and AMD can spend a tiny fraction of the time (and, thus, money) that nVidia do developing drivers. In the long run, this means either smaller profits or more expensive cards for nVidia, and more bugs in nVidia drivers (since they don't have the same real-world coverage testing).

Now, if you're talking just about specs, then you're just plain trolling. Intel doesn't lose anything to AMD by releasing the specs for the Core 2 in a 3000 page PDF, because the specs just give you the input-output semantics, they don't give you any implementation details. Anyone with a little bit of VLSI experience could make an x86 chip, but making one that gives good performance and good performance-per-Watt is a lot harder. Similarly, the specs for an nVidia card would let anyone make a clone, but they'd have to spend a lot of time and effort optimising their design to get anywhere close to the performance that nVidia get.

Re:Say no to proprietary NVIDIA hardware (0)

Anonymous Coward | more than 6 years ago | (#24358487)

The flip side of this is that prior to TTM/GEM merging into DRI (which still hasn't occurred, FTR), all the drivers using the "open" graphics stack couldn't implement allocated items like pixel-buffer objects and frame-buffer objects. The only way to get those was the closed all-in-one stack that ATi and nVidia offer.

Now that the central "open" stack is improving we'll see big improvements in the open drivers almost for free, but before this point Intel, etc., just couldn't be feature-complete. You're a slave to the architecture given to you.

Re:Say no to proprietary NVIDIA hardware (0)

Anonymous Coward | more than 6 years ago | (#24359421)

random hippyware developers

I lolled.

Not about source, but about *Technical Specs*. (1)

DrYak (748999) | more than 6 years ago | (#24356959)

AMD/ATI have released the specs for their hardware. Why haven't the proprietary NVIDIA engineers done the same?

Nvidia has to worry about ATI stealing their drivers {...} ATI knows that, and that's why their drivers are open.

We are not speaking about releasing the source code of current drivers. In fact, ATI/AMD's fglrx *IS NOT* open. At all. What is open are 2 *separate* driver projects, which are developed using the *technical data* released by AMD.

You're confusing the situation with Intel's. (They paid Tungsten Graphics to write an open source driver for i8xx/i9xx to begin with. There's no such thing as a proprietary Intel driver on Linux - only an open source driver written by TG.)

What we want is not nVidia releasing the source of their drivers. What we want is nVidia providing enough technical data to the Nouveau project, so they can develop alternative open source drivers.

I think Nvidia's rationale for not giving up their specs is reasonable. Now, if only they cared more about their drivers for Linux, proprietary or not.

Their main rationale isn't about ATI peeking inside their specs. After all, what the open source projects are asking for (and obtaining from AMD and finally from VIA, but still not from nVidia) is not the *code*, but the *specs*. And the specs are only a description of how to interface with the hardware. As Radeon HDs and GeForces use radically different designs, the specs won't help much beyond giving a slightly better idea of what the other vendor's hardware is doing under the hood. There's no way that knowing which hardware register does what will help anyone steal driver code.
In fact it wouldn't help at all, because ATI and most open source projects use Linux and BSD's standard Xorg + DRI stack, whereas nVidia uses its own structure to handle 3D.

The most probable reason for not releasing the hardware specs is that modern GPU hardware is horribly complicated, with a big number of teams working on an insane number of components. Significant portions of the technology going into a graphics card might have been subcontracted or licensed from 3rd-party providers.

It can be a real legal nightmare to track all the license dependencies to clear a release. For nVidia, which has covered a pretty big chunk of the market (Windows and some Linux machines running the proprietary drivers), going through all the legal hoops just to please the last few corner cases (Linux running on non-x86 hardware, other open source systems, etc.), which only bring in tiny fractions of the market, simply isn't worth it at all.

This complexity can also be seen in AMD/ATI's release scheme:
- They release the specs one piece at a time, with long delays because of the necessary checks with the legal department to obtain clearance. (The process is sometimes criticized on IRC channels for its slowness.)
- Some specs won't be releasable at all. In current generations of GPUs, the video decompression acceleration is intertwined with the HDCP encryption, and the licensing terms of the latter prohibit releasing the specs of the current video+HDCP unit. This was mentioned in an interview on /. a couple of months ago. They plan to use a different design in future products, with clearly separate video and HDCP units, so releasing the specs of the former won't violate the licensing terms of the latter. (They've also said elsewhere that they will move to more open-source-friendly designs overall.)

The situation is somewhat different with VIA:
- Their chips are rather small projects combining fewer technologies and mostly done in-house. The licensing complexity is smaller.
- They don't license 3rd-party technologies, but instead count on OEMs and system integrators to license what they need themselves. This may pose problems for Windows development (it was recently mentioned on /. that their chips have a video decompression unit, but they don't provide any software middleware in their driver to enable using it - set-top box manufacturers have to license their own from somewhere else). But it's a big advantage for Linux, because there aren't any "sorry, our licensor prohibits us from divulging this kind of information" situations, since there aren't any 3rd-party licenses to begin with.

Intel is also slightly different:
- Tungsten Graphics was paid to write open source drivers to begin with, so there's no problem opening the source later.
- Most of the technology used in their chips is developed in-house (thus slower development times on things like Larrabee :-P ).
- The rest of the technologies are open to begin with (3dfx's FXT1 compression supported on Intel i8xx/i9xx), so no licensing problems either.

Re:Not about source, but about *Technical Specs*. (1)

jelle (14827) | more than 6 years ago | (#24359679)

Last December, AMD hired Alex Deucher to work on xorg for them:

His blog here [botchco.com]

If you don't know who Alex Deucher is, just Google [google.com] his name.

Re:Say no to proprietary NVIDIA hardware (3, Insightful)

cheater512 (783349) | more than 6 years ago | (#24356137)

I want to support ATI and AMD, but nVidia just works.
Their drivers are very nice.

Until that changes I'm an nVidia guy.
A third-party open source driver should fix the problems.

Re:Say no to proprietary NVIDIA hardware (1, Informative)

Anonymous Coward | more than 6 years ago | (#24356279)

I upgraded my X800XL to an 8800GT. With Windows, I never had a problem with my X800XL and I still have not seen a problem with the 8800GT. The X800XL just worked and the 8800GT just works.

With Ubuntu, the X800XL was working nicely (open source drivers) and the 8800GT is a piece of crap. NVidia's drivers are horribly slow and a lot of users are reporting the same thing. I have an old computer with an even older GeForce 4 MX and it displays things faster.

Before I bought my 8800GT I didn't care much about one company or the other, but unless NVidia can release something that works well, I guess I am pro ATI for now.

Re:Say no to proprietary NVIDIA hardware (1)

cheater512 (783349) | more than 6 years ago | (#24356513)

Erm, are you using the open source drivers? They have no 3D acceleration.

Use the nVidia drivers, which are nearly identical to the Windows ones.

Re:Say no to proprietary NVIDIA hardware (1)

Antique Geekmeister (740220) | more than 6 years ago | (#24356433)

Maybe they work for you: I find NVidia drivers quite painful, especially for non-Windows operating systems. And a 'third party open source driver' can't get the details of the NVidia API to work from, which means a huge amount of reverse engineering, especially of their proprietary OpenGL libraries, which are at the core of their enhanced features in non-Windows operating systems.

Re:Say no to proprietary NVIDIA hardware (1)

walshy007 (906710) | more than 6 years ago | (#24356509)

"I find NVidia drivers quite painful, especially for non-Windows operating sytems."

Wait, so you're telling me you have trouble with the Windows drivers too? It's a single download for the platform you're on, and next, next, done.
Granted, the Linux ones have a couple more steps than that, but it's still rather trivial for most people, considering it's the most frequently used driver for 3D on Linux (besides possibly Intel's).

Re:Say no to proprietary NVIDIA hardware (1)

Antique Geekmeister (740220) | more than 6 years ago | (#24357635)

I've had to clean up after someone trying to fix their PC and driver problems went and re-installed drivers from their media when I'd updated from NVidia's site, and monitors became completely unavailable on dual-display cards from the previously working display, and it was impossible to fix without dragging in another monitor with the other connector type and fixing things from the other display. It's compounded on systems with built-in displays and add-on graphics cards.

So yes, I've had real problems with NVidia's Windows drivers. And the Linux installer problems are compounded by NVidia's refusal to allow their drivers to be published as part of an OS release, and their attempt to force manual installation. This breaks package management of the OpenGL libraries for the operating system.

Re:Say no to proprietary NVIDIA hardware (1)

harlows_monkeys (106428) | more than 6 years ago | (#24356417)

Maybe they don't want people writing crappy drivers [blogspot.com] for NVIDIA cards?

It's coming (1, Funny)

Anonymous Coward | more than 6 years ago | (#24356037)

The day when self modification/upgrade enthusiasts start overclocking themselves and bragging about how many fps their eyes get watching the superbowl.

Re:It's coming (1)

Alwin Henseler (640539) | more than 6 years ago | (#24356111)

Well, for the time being I prefer to tinker with things outside my body, thank you.

Re:It's coming (2, Funny)

Anonymous Coward | more than 6 years ago | (#24356189)

Jesus doesn't approve of you doing that.

Re:It's coming (1)

killmofasta (460565) | more than 6 years ago | (#24356437)

Been there, done that. Volcano beans, fresh ground in an espresso machine.

Damn it, I just wet myself!! (1)

notanatheist (581086) | more than 6 years ago | (#24356075)

I noticed they didn't have 8GB of RAM though. Very sad.

So that's what happens (4, Funny)

chabotc (22496) | more than 6 years ago | (#24356081)

When gamers grow up and go to college... blue LEDs and bling in the server room!

Re:So that's what happens (0)

Anonymous Coward | more than 6 years ago | (#24356799)

I really think all the "bling" detracts from the legitimacy of their work. It looks like something I would've built if I was 15 and just got a hold of MIT's credit card. No class.

Alright! (3, Funny)

religious freak (1005821) | more than 6 years ago | (#24356097)

One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)

Re:Alright! (1)

MichaelSmith (789609) | more than 6 years ago | (#24356205)

One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)

She would have to be cooled with liquid nitrogen, running all those GPUs.

Re:Alright! (1)

Yoozer (1055188) | more than 6 years ago | (#24356253)

It doesn't even need to be able to play blackjack!

Re:Alright! (1)

Ruie (30480) | more than 6 years ago | (#24356403)

One more step to the last invention man ever need make... hooker bot. (mine would be a Buffy Bot, but that's just personal preference)

Here you go: one robotic buffing cell [intec-italy.com]

Re:Alright! (0)

Anonymous Coward | more than 6 years ago | (#24356685)

One Willow-bot, please.

Re:Alright! (1)

DarkOx (621550) | more than 6 years ago | (#24356765)

I know this is Slashdot, so you have probably not spent much time around females, at least not those of our species, but let me tell you: an angry one is a dangerous creature. All of them do get pissed off some of the time. You can be the greatest guy ever and sooner or later you will make a mistake. The good news is that if you are a good guy they will forgive you, but the period between your screw-up and their forgiveness can be extremely hazardous.

Buffy is a fun show and all, but if I were ordering a robo-girl, I'm not sure I would spec out one with more attitude than usual, super strength, and a strong preference for solving her problems by hitting.

when thought fails... (0)

Anonymous Coward | more than 6 years ago | (#24356125)

...throw more computing power at it.

I miss '60s-'70s AI research.

Oh, and get off my lawn.

Computing power is how nature does it (1)

mangu (126918) | more than 6 years ago | (#24356825)

Do you know the human brain has about 100 billion neurons? Each neuron can be represented as a weighted average of its inputs; a typical human neuron has some 1000 inputs and does around a hundred operations per second.

So, yes, *maybe* there could be some very smart algorithm that mimics human reasoning, but that's not how it's done in the human brain. It's raw computing power all the way.
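
To make the parent's arithmetic concrete, here is a minimal CUDA sketch (all names hypothetical, not from the article) of the "neuron as a weighted sum of its inputs" model: one GPU thread per neuron, each summing its ~1000 weighted inputs. It leaves out everything a real simulation needs (connectivity patterns, nonlinearities, learning); it only shows how naturally that arithmetic maps onto a GPU.

<ecode>
#include <cuda_runtime.h>
#include <cstdio>

// One thread per neuron. Weights are stored row-major:
// weights[neuron * n_inputs + i]. The output is a plain weighted sum.
__global__ void neuron_layer(const float *inputs, const float *weights,
                             float *outputs, int n_neurons, int n_inputs)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= n_neurons) return;

    float sum = 0.0f;
    for (int i = 0; i < n_inputs; ++i)
        sum += weights[(size_t)n * n_inputs + i] * inputs[i];
    outputs[n] = sum;   // a real model would apply a nonlinearity here
}

int main()
{
    const int n_neurons = 1 << 16, n_inputs = 1000;   // toy sizes
    float *d_in, *d_w, *d_out;
    cudaMalloc((void **)&d_in,  n_inputs * sizeof(float));
    cudaMalloc((void **)&d_w,   (size_t)n_neurons * n_inputs * sizeof(float));
    cudaMalloc((void **)&d_out, n_neurons * sizeof(float));
    // (filling in the inputs and weights is omitted here)

    neuron_layer<<<(n_neurons + 255) / 256, 256>>>(d_in, d_w, d_out,
                                                   n_neurons, n_inputs);
    cudaDeviceSynchronize();
    printf("layer evaluated for %d neurons\n", n_neurons);

    cudaFree(d_in); cudaFree(d_w); cudaFree(d_out);
    return 0;
}
</ecode>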

Just how specialized is GPU hardware? (5, Interesting)

MR.Mic (937158) | more than 6 years ago | (#24356129)

I keep seeing all these articles about bringing more types of processing applications to the gpu, since it handles floating point math and parallel problems better. I only have a rudimentary understanding of programming compared to most people on this site, so the following may sound like a dumb question. But how do you determine what types of problems will perform well (or are even possible to be solved) through the use of GPUs, and just how "general purpose" can you get on such specialized hardware?

Thanks in advance.

Re:Just how specialized is GPU hardware? (1)

cheater512 (783349) | more than 6 years ago | (#24356155)

It's just a matter of transforming the data into a format the GPU can handle efficiently.

Re:Just how specialized is GPU hardware? (4, Interesting)

hansraj (458504) | more than 6 years ago | (#24356251)

Not really. Not every problem gains from a GPU.

As a rule of thumb, if your problem requires solving many instances of one simple subproblem which are independent of each other, then a GPU helps. A GPU is like a CPU with many, many cores, where each core is not as general-purpose as your Intel chip; rather, each core is optimized for solving some small problem (without optimizing for the frequent load/store/switching operations etc. that a general CPU handles quite well).

So if you see an easy parallelization of your problem, you might think of using a GPU. There are problems that are believed not to be efficiently parallelizable (Linear Programming is one such problem). Also, even if your problem can easily be made parallel, it might be tricky to benefit from a GPU, as each subroutine might be too complex.

I don't program, but my guess would be that if you can see the solution to your problem consisting of a few lines of code running on many processors and gaining anything, a GPU might be the way to go.

Perhaps someone can explain it better.
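
To make the rule of thumb above concrete, here is about the smallest useful CUDA kernel (an illustrative sketch, not anything from the posts): the same few arithmetic operations applied to every element of a large array, with no element depending on any other - exactly the "many independent instances of a simple subproblem" shape.

<ecode>
// Each thread handles exactly one element, and no thread depends on any
// other thread's result -- the shape of problem a GPU handles well.
__global__ void saxpy(float a, const float *x, const float *y,
                      float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * x[i] + y[i];
}

// Launched with enough 256-thread blocks to cover all n elements, e.g.:
//   saxpy<<<(n + 255) / 256, 256>>>(2.0f, d_x, d_y, d_out, n);
</ecode>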

Re:Just how specialized is GPU hardware? (2, Informative)

TapeCutter (624760) | more than 6 years ago | (#24357195)

I think you did a good job explaining; one point though: the sub-problems need not be independent.

Many problems, such as weather prediction, use finite element analysis with a "clock tick" to synchronise the results of the sub-problems. The sub-problems themselves are cubes representing X cubic kilometers of the atmosphere/surface, and each sub-problem depends on the state of its immediate neighbours. The accuracy of the results depends on the resolution of the clock tick, the volume represented by the sub-problems, and the accuracy of the initial conditions. This is useful in all sorts of simulations, from designing molds to minimise the air pockets that can plague the process of metal casting, to shooting Cassini through the rings of Saturn, twice!

The technique can be thought of as brute-force integration with respect to time, space, matter, energy, etc., for a wide range of physical problems. The more power and raw data you throw at these types of problems, the more realistic the "physics" in both video games and scientific simulations. IMHO we have only just scratched the surface of what computers can tell us about the real world through these types of simulations, and much of this is due to scientists in many fields confusing "computer simulation" with "artist's impression".

BTW, climate and weather modelling use the same sort of algorithm but get very different results because weather != climate.
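
A rough sketch of what that looks like in code (illustrative only, with hypothetical names): a 2-D stencil where each cell is updated from its neighbours' values at the previous tick. The "clock tick" is enforced by double-buffering and launching one kernel per step, since each kernel launch acts as a global synchronisation point.

<ecode>
// One time step of a toy 2-D diffusion stencil. Every cell reads its four
// neighbours from the *old* grid and writes the *new* grid, so all cells in
// a step see a consistent snapshot of the previous "clock tick".
__global__ void step(const float *old_grid, float *new_grid, int nx, int ny)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= nx - 1 || y >= ny - 1) return;

    int i = y * nx + x;
    new_grid[i] = 0.25f * (old_grid[i - 1]  + old_grid[i + 1] +
                           old_grid[i - nx] + old_grid[i + nx]);
}

// Host side: swap buffers between launches; each launch is one tick.
//   for (int t = 0; t < steps; ++t) {
//       step<<<grid, block>>>(d_a, d_b, nx, ny);
//       std::swap(d_a, d_b);
//   }
</ecode>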

Re:Just how specialized is GPU hardware? (2, Informative)

TheLink (130905) | more than 6 years ago | (#24356263)

You also need to make sure the I/O to/from the GPU is good enough.

No point being able to do calculations really fast if you can't get the results back or keep feeding the GPU with data.

I think not too long ago graphics cards were fast, but after you added the problem of getting calculation results back, it wasn't really worth it.
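
For illustration (assumed structure, not from the post), this is the round trip the I/O concern is about: the two PCIe copies that bracket the kernel often dominate, so the arithmetic done per byte transferred has to be high enough to amortise them.

<ecode>
#include <cuda_runtime.h>

// A typical round trip: copy in over PCIe, compute, copy back out. If the
// kernel does only a few operations per byte moved, the two cudaMemcpy
// calls dominate and the GPU can end up slower than just using the CPU.
void run_on_gpu(const float *h_in, float *h_out, int n)
{
    float *d_in, *d_out;
    size_t bytes = (size_t)n * sizeof(float);
    cudaMalloc((void **)&d_in,  bytes);
    cudaMalloc((void **)&d_out, bytes);

    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);    // PCIe in
    // some_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n); // (hypothetical kernel)
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);  // PCIe out

    cudaFree(d_in);
    cudaFree(d_out);
}
</ecode>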

Re:Just how specialized is GPU hardware? (1)

Mattsson (105422) | more than 6 years ago | (#24356719)

That was due to the asymmetric design of AGP.
PCI-Express is symmetric, so it doesn't have this limitation.

Re:Just how specialized is GPU hardware? (5, Interesting)

moteyalpha (1228680) | more than 6 years ago | (#24356257)

I have been using my own GPU to do this very same thing, by automatically converting images to vertex format and using the GPU to scale, shade, etc.; in this way I can do shape recognition by simply measuring the closest match in the frame buffer. There are more complex ways to use the GPU to do pseudo-computation in parallel. I still think that a commonly available CAM (or near-CAM) would speed up neural-like computations by being an essentially completely parallel process. It would be better to allow more people to experiment with the methods, because the greatest gain and cost is the software itself, and specialized hardware for a single purpose allows better profit but limits innovation.

Re:Just how specialized is GPU hardware? (0, Troll)

Louis Savain (65843) | more than 6 years ago | (#24356539)

GPUs use an SIMD (single instruction, multiple data) configuration. They perform best when there is a huge number of independent entities that must be processed in parallel by applying the same operation to all of them simultaneously. Visual processing is a form of signal processing, similar to graphics processing, in which a stream of parallel data undergoes the same transformation. The problem is that the minute you inject any kind of data dependency, your performance takes a major hit. For example, you would not want to use it to implement a parallel quicksort algorithm. So the answer is no, GPUs are not for general-purpose computing, but they are fast at what they do and, unlike multithreading, which is coarse-grained, they perform fine-grained parallelism. For an easy-to-read treatment of the two main types of parallel processors (SIMD and MIMD), read Nightmare on Core Street [blogspot.com].
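
To make the data-dependency point concrete, here is a small contrast (illustrative only, not from the linked essay): an element-wise operation that maps perfectly onto thousands of GPU threads, versus a running sum whose loop-carried dependency resists that naive split.

<ecode>
// Fine on a GPU: every output element depends only on its own input,
// so the same operation can be applied to all elements at once.
__global__ void square_all(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

// Poor fit as written: a loop-carried dependency. Each output needs the
// previous one, so the work can't simply be split across thousands of
// threads. (Parallel scan algorithms exist, but they need extra passes.)
void running_sum_cpu(const float *in, float *out, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += in[i];
        out[i] = acc;
    }
}
</ecode>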

Re:Just how specialized is GPU hardware? (0)

TheRaven64 (641858) | more than 6 years ago | (#24356601)

A GPU executes shader programs. These are typically kernels - small programs that are run repeatedly on a lot of inputs (e.g. a vertex shader kernel runs on every vertex in a scene, a pixel shader on every pixel). You can typically run several kernels in parallel (up to around 16 I think, but I've not been paying attention to the latest generation, so it might be more or less). Within each kernel, you have a simple instruction set designed to be efficient on pixels and vertexes. These are both four-element vectors (red, green, blue, and alpha channels for pixels; x, y, z, and w co-ordinates for vertexes, where w is used for normalisation). Each of the elements in these vectors is a single-precision floating point value.

If your algorithm involves a relatively small number of steps on each element of a large input data set, each of the steps is assembled from groups of single-precision floating point operations performed simultaneously on four input values, and it has very few branches, then it will be insanely fast on a GPU. If it needs integer arithmetic, lots of branching, double-precision floating point, and lots of non-local memory accesses, then it will be incredibly slow. Most problems lie somewhere in the middle of these two extremes - how close they are to each end tells you how well they will perform.
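
For illustration only (written as CUDA rather than an actual shader language), here is roughly what "the same short kernel applied to every pixel, four floats at a time" looks like: a per-pixel brightness scale on float4 RGBA values.

<ecode>
// A per-pixel "kernel": the same handful of single-precision operations
// applied to every RGBA pixel independently.
__global__ void scale_brightness(float4 *pixels, int n_pixels, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pixels) return;

    float4 p = pixels[i];
    p.x *= gain;    // red
    p.y *= gain;    // green
    p.z *= gain;    // blue
    pixels[i] = p;  // alpha (p.w) left untouched
}
</ecode>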

becoming less specialized... (2, Insightful)

Mad Hughagi (193374) | more than 6 years ago | (#24358281)

The GPU architecture has been progressively moving to a more "general" system with every generation. Originally the processing elements in the GPU could only write to one memory location; now the hardware supports scattered writes, for example.

As such, I think the GPGPU method of casting algorithms into the GPU APIs (CUDA et al.) is going to die a quick death once Larrabee comes out and people can simply run their threaded code on these finely-grained co-processors.

Meh... (0)

Anonymous Coward | more than 6 years ago | (#24356159)

All of the hardware (save the case) is stock. It's just two machines (with 4 video cards each) in one physical case.

Yet still can't get PhysX running on 2x8800M GTXs (1)

fostware (551290) | more than 6 years ago | (#24356385)

I'm still eager to see PhysX running on my dual 8800M GTX laptop. I've run all the drivers from 177.35 up and I'm running the 8.06.12 PhysX drivers as required.
Apparently it's just the mobile versions :(

Re:Yet still can't get PhysX running on 2x8800M GT (1)

bmgoau (801508) | more than 6 years ago | (#24356939)

That's an easy problem to solve! Just wait for the technology to mature before purchas... Oh.

Powered by Nvidia! (1)

Hackerlish (1308763) | more than 6 years ago | (#24356465)

> 8x9800gx2s donated by NVIDIA.

I wonder how many BSODFLOPS (Blue screens of death per second) it can generate? ;)

http://byronmiller.typepad.com/byronmiller/2005/10/stupid_windows_.html [typepad.com] http://www.google.com.au/search?q=nvidia+'blue+screen+of+death'+nv4_disp [google.com.au]

Re:Powered by Nvidia! (1)

MoFoQ (584566) | more than 6 years ago | (#24356535)

vista or xp?

forget 9x!

Yeah but.... (0, Redundant)

magnus_1986 (841154) | more than 6 years ago | (#24356467)

... can it run Crysis on Vista at full settings? I kid, I kid!

Re:Yeah but.... (0)

Anonymous Coward | more than 6 years ago | (#24356517)

I'm actually seriously asking this question to myself... (checks to make sure he's not logged in ;) )

is it just me or .... (0)

Anonymous Coward | more than 6 years ago | (#24356473)

Wouldn't that many GPUs (8 on each?) on a single motherboard fight for contention on the bus? Depends on the algorithm, I suppose... seems like a good excuse to try out the latest Doom anyway.

Re:is it just me or .... (1)

ezzzD55J (697465) | more than 6 years ago | (#24356639)

PCIe isn't a bus; it's point-to-point.

machine or machines? (4, Insightful)

MoFoQ (584566) | more than 6 years ago | (#24356489)

is it me or do I see two separate mobos...which means it's two machines, 8 per machine in one box....not 16?

now...if it was 16 in one...now that would be amazing....otherwise...it's not...'cuz there was that other group that did 8 in 1 [slashdot.org] (aka...16/2 => 8/1)

Re:machine or machines? (1)

OneMadMuppet (1329291) | more than 6 years ago | (#24356979)

Yep, it's 2 machines in 1 case. The Ethernet isn't cross-patched, and I don't see anywhere that the motherboards are connected, so really they just have a really small cluster.

Re:machine or machines? (2, Informative)

sam0737 (648914) | more than 6 years ago | (#24357395)

That's one machine for simulating one eye. That's why they need 2 * 8 for simulating human-level vision, or else you won't get the 3D vision.

Re:machine or machines? (1)

owlstead (636356) | more than 6 years ago | (#24357509)

Maybe so, but why not just build two machines? The only reason I can think of is that this sounds cooler. Maybe they save a bit of money by having a single cooling solution/power supply, but I don't see it. Strangely enough, the machine doesn't seem to be symmetric. They've probably put one motherboard upside down; otherwise you would have to split the case. Let's hope the magic doesn't leak out.

Re:machine or machines? (1)

sam0737 (648914) | more than 6 years ago | (#24357769)

It's symmetric, ...rotational symmetric [wikipedia.org] . ;P

Re:machine or machines? (1)

dave1g (680091) | more than 6 years ago | (#24357895)

Well, if you use that definition then none of the supercomputers built in the last decade count either, since they are all giant clusters.

Here's what they need : (1)

unity100 (970058) | more than 6 years ago | (#24356541)

A system with 16 x 4870 X2s. They would draw less energy too.

It will be used for Bomb Modelling (0)

Anonymous Coward | more than 6 years ago | (#24356595)

Read the major mandate of the Los Alamos research centre: "Our highly skilled management team of nuclear experts and industry leaders are focused on making Los Alamos the premier national security laboratory of the 21st century."

This isn't for gaming, this is for planning how to more cost effectively kill humans.

Re:It will be used for Bomb Modelling (0)

justinlee37 (993373) | more than 6 years ago | (#24357307)

Awesome! If we could kill everyone in the second and third world for pennies on the dollar, imagine how cheap real estate would get to be! Then we could buy, buy, buy and all get rich! Yeah!

Re:It will be used for Bomb Modelling (1)

Free the Cowards (1280296) | more than 6 years ago | (#24359821)

It's better than setting off real live nuclear weapons in the desert like they used to do.

Re:It will be used for Bomb Modelling (1)

im_thatoneguy (819432) | more than 6 years ago | (#24360093)

This isn't for gaming, this is for planning how to more cost effectively kill humans.

Let me fix that for ya:

This isn't for gaming, this is for planning how to more cost effectively threaten to kill humans.

Is that a helicopter? (2, Funny)

Anonymous Coward | more than 6 years ago | (#24356687)

God, they stuck so many fans into that box that I bet it takes off the ground when it boots.

quantum computer? (1)

Laxori666 (748529) | more than 6 years ago | (#24357423)

They should use the quantum computer described a few posts above, it seems to be especially designed for pattern matching that computer vision might require.

16 GPU'S? How about 64? (0)

Anonymous Coward | more than 6 years ago | (#24357909)

http://acceleware.com/newsEvents/newsreleasearchive/20080617clustersolution.cfm

Not a single informed comment on machine vision (0)

Anonymous Coward | more than 6 years ago | (#24358303)

Pathetic. Not my field, so I can't comment either. But all those upmodded jokes and irrelevant, uninformed side discussions about stream processing on GPUs show that /. modders often mod up crap just because they have nothing else to mod up.

Hey, if all one has in a forum is total crap, mod nothing.

Worlds fastest supercomputer eh? (1)

bigplrbear (1179259) | more than 6 years ago | (#24358331)

the Roadrunner, the world's fastest supercomputer

But does it run Crysis?

Human Vision (1)

skelly33 (891182) | more than 6 years ago | (#24358961)

On June 30 of this year, The New Yorker magazine published a fascinating, if at moments disturbing article entitled The Itch [newyorker.com] . The article discusses, among other things, the human mind's perception of the reality of its environment based on the various nervous inputs it has, vision included. Apparently this is an oft debated topic among the scientific community, but it was new information to me.

One of the things I found intriguing was the note that the bulk (80%) of the neural interconnections going into the visual cortex of the human brain come not from the optic nerves themselves, but from other areas of the brain including "functions like memory." The suggestion is that the eyes provide visible light input, but that the brain's processing of what it is looking at is primarily an act of abstract object/pattern reconstruction.

If you couple this notion with the limitations of the human eye itself -- such as the fact that the finest resolution/detail comes from an incredibly narrow region directly in the center of the FOV with rapidly decreasing information towards the extremities that ranges from soft profiles to mere suggestions of color, brightness and movement -- it strikes me that if researchers at MIT wish to replicate the model of human vision on any level close to reality, their input and processing systems should actually be modeled a bit like the real deal.

I believe that without a solid neural net with strong pattern recognition, instant recall, and massive parallel processing - the finer things of the visual cortex - human vision will not be possible with semiconductors. Are 16, 32, or 100 GPUs substantial enough to pull it off? Probably not, I think - too much overhead, not enough interconnects... to coin a pun: "too RISCy". I must admit, though, that the research discussed in the technologyreview link is very interesting.

Where them *details* at? Please update w/ a link. (1)

schwaang (667808) | more than 6 years ago | (#24359851)

I looked through each of the TFAs linked in the story, and I don't see any technical details on this system. Whereas when the FASTRA people at the University of Antwerp put together their 4x 9800 GX2 system for CUDA, they published all the nitty-gritty, down to specific parts, etc. The pictures are interesting but not enough.

Question (0)

Anonymous Coward | more than 6 years ago | (#24359941)

So which one of those guys in the pics is Miles Dyson?
