
Supercomputer On-a-Chip Prototype Unveiled

CowboyNeal posted more than 7 years ago | from the more-is-better dept.

Supercomputing 214

An anonymous reader writes "Researchers at University of Maryland have developed a prototype of what may be the next generation of personal computers. The new technology is based on parallel processing on a single chip and is 'capable of computing speeds up to 100 times faster than current desktops.' The prototype 'uses rich algorithmic theory to address the practical problem of building an easy-to-program multicore computer.' Readers can win $500 in cash and write their names in the history of computer science by naming the new technology."


Name ? (2, Insightful)

Hsensei (1055922) | more than 7 years ago | (#19684499)

What's wrong with Supercomputer On-a-Chip (c) ?

Re:Name ? (1, Funny)

Aranykai (1053846) | more than 7 years ago | (#19684553)

I call it the Gargantu-Hertz Processor :P

Re:Name ? (2, Funny)

Anonymous Coward | more than 7 years ago | (#19684599)

What about people-ready chip?

Re:Name ? (1)

ozmanjusri (601766) | more than 7 years ago | (#19685109)

How much did you earn for that?

Re:Name ? (3, Funny)

DigiShaman (671371) | more than 7 years ago | (#19684719)

Supercomputer-On-a-Chip, or SOAC (pronounced soak).

"Need your data processed in a jiffy? Then SOAC your data on our new chip. All yours for $19.95*!"

*sorry, no CODS accepted

I don't know much about marketing... (1)

NotQuiteReal (608241) | more than 7 years ago | (#19684731)

I think SOC would SUCK as a product name.

Re:I don't know much about marketing... (1)

normuser (1079315) | more than 7 years ago | (#19684927)

I think SOC would SUCK as a product name.

I agree. Since words are linked in my head by how they sound, SOC would just make me think of all the nasty clothes I have yet to wash.

Re:Name ? (4, Funny)

OctoberSky (888619) | more than 7 years ago | (#19685037)

Babywulf Cluster

Re:Name ? (4, Funny)

hAckz0r (989977) | more than 7 years ago | (#19685223)

What's wrong with Supercomputer On-a-Chip (c) ?

Oh great, I can hear the PR advertisements already; "Put a SOC in it".

Re:Name ? (0)

Anonymous Coward | more than 7 years ago | (#19685687)

Super Lucky Besto Computing Chip

Re:Name ? (1)

IdleTime (561841) | more than 7 years ago | (#19685747)

I like, for obvious reasons and it's quite appropriate here, "Deep Thought"

Re:Name ? (1)

KDR_11k (778916) | more than 7 years ago | (#19686155)

Isn't SOC already taken for System-on-Chip? Maybe ScOC. No idea what the difference would be between an ScOC and an MPSoC.

"Cell" (3, Insightful)

Doc Ruby (173196) | more than 7 years ago | (#19684509)

I call the "supercomputer on a chip" the "Cell microprocessor [wikipedia.org] ". Of course, next year, it won't be so super. But there will be a new one that's really super.

Re:"Cell" (1)

Spy der Mann (805235) | more than 7 years ago | (#19684563)

I know, I know! Let's call it the Goku(TM) microprocessor! :D

Re:"Cell" (0)

Anonymous Coward | more than 7 years ago | (#19684567)

So that'd be a Super Cell chip, right?
If Sony loses funding for it, they can always sell it to the Canadians and tell them it gives them an edge in high altitude weather forecasting!

Re:"Cell" (1)

dwarfsoft (461760) | more than 7 years ago | (#19685991)

"Buy! Buy! BUY! The Cell, Cell, CELL!"

Re:"Cell" (1)

b0101101001010000 (1082031) | more than 7 years ago | (#19685641)

It's very interesting reading the paper linked from the article: http://www.umiacs.umd.edu/users/vishkin/XMT/spaa07paper.pdf [umd.edu]. It reminds me of the Mercury Computing programming toolkit for Cell processor programming. They too have a spawn and join method of concurrent programming; see http://www.mc.com/uploadedImages/MCF-FOE-model.jpg [mc.com] at http://www.mc.com/microsites/cell/ProductDetails.aspx?id=2824 [mc.com]. Notice the worker/manager similarity to the spawn/join semantic. It would appear that this chip is fundamentally the same, but provides implicit engine allocation. Very interesting....

I would name it vaporware technology (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19684511)

I have a big ole dangling schvank

or as CmdrTaco fondly calls it, a big ole schvankenstein

which one of you tossers is up for giving it a workout?

Call it the iChip and everyone will want one (-1, Flamebait)

CrazyJim1 (809850) | more than 7 years ago | (#19684517)

Apple. I thought the only reason people used those was because they gave them away free to schools.

Taken? (3, Funny)

bryan1945 (301828) | more than 7 years ago | (#19684539)

"Readers can win $500 in cash and write their names in the history of computer science by naming the new technology."

Is "Clippy" taken?

Re:Taken? (3, Funny)

trolltalk.com (1108067) | more than 7 years ago | (#19685353)

Chipzilla would be good, except that's what everyone calls Intel. I guess we'll have to settle for "CowboyNealOnAChip". Or "theChipThatCanActuallyRunJavaProgramsWithinTheUniversesLifetime"

What gets me is that that there's a dropdown in the entry form to choose your country, as well as asking you for your state or province, but the rules state:

WHO MAY ENTER: Open to all legal residents of the 50 United States (including the District of Columbia) who are 18 years or older in their respective US state at time of entry. Individuals employed by the University of Maryland, College Park. ("University") as faculty, exempt or non-exempt employees, and members of their immediate family or persons living in the same household, are not eligible to enter or win.

I hope their chip design is better thought out than the contest form.

WTF? (4, Insightful)

msauve (701917) | more than 7 years ago | (#19684559)

We have microcomputers and supercomputers and nothing in between? Seems to be a bit of hyperbole involved here.

Re:WTF? (3, Funny)

gardyloo (512791) | more than 7 years ago | (#19684615)

We have microcomputers and supercomputers and nothing in between? Seems to be a bit of hyperbole involved here.
Most. Insightful. Post. Ever. ;)

WTF?-another history lesson? (0)

Anonymous Coward | more than 7 years ago | (#19685283)

"Most. Insightful. Post. Ever. ;)"

*smirk*

For all you youngsters, there is minicomputer.

Re:WTF? (0)

Seumas (6865) | more than 7 years ago | (#19684667)

Agreed. They are obviously presenting this as a user/consumer chip for the desktop. Hence the comparison to its speed over a desktop. This might be of great interest to the NSA and other government agencies that do domestic spying and for companies like Google, but what is even the high-end gamer going to need a chip 100 times faster than today's machines for any time in the next decade? And of course, it will be about a decade before this is even affordable for a consumer, anyway.

Maybe we can call it "blackout", since that's what these will probably do after sucking the power they need.

And wouldn't it be appropriate to label this story as the press release that it is?

Re:WTF? (5, Insightful)

Kadin2048 (468275) | more than 7 years ago | (#19685649)

but what is even the high-end gamer going to need a chip 100 times faster than today's machines for any time in the next decade?

If you compare megahertz-cores (number of megahertz times number of cores at that speed), I suspect that there's been almost a 100x increase in the past 10 years, at least if you look from the low end a decade ago to the high end of personal computers now.

I don't see why the next ten years would be any different. Operating systems will continue to get more bloated, software packages will get more feature-stuffed, games will continue to demand just slightly more than whatever's available to most people with expenses and regular lives, and most people will buy a new machine every few years based on whatever's on sale for $500 at Best Buy when their old one gets clogged with spyware.

Sure, 100x might be a bit of a stretch (I'm not sure whether silicon will go that much further and I'm not totally convinced that parallelism is the solution for general-purpose computing), but if that kind of power was available, it would be put to use.

Software expands to fill the resources made available to it, and then some. Always has and always will.

Re:WTF? (1)

EEPROMS (889169) | more than 7 years ago | (#19685597)

Micro -> Mini -> Supercomputer. Minis used to be small business systems, sometimes with more than one CPU, that interacted with a group of terminals or PCs.

The Cowardly Lion says.......... (0)

Anonymous Coward | more than 7 years ago | (#19684575)

Looks like a cluster on a single board. The cleaning analogy is kind of stupid. If I had 100 people cleaning my house at the same time, they wouldn't get shit done. New twist on old technology.

I'm not sleeping, I passed out from holding my breath.

Signed,

The Cowardly Lion

Re:The Cowardly Lion says.......... (1)

Meostro (788797) | more than 7 years ago | (#19684687)

The cleaning analogy is perfectly apt!

If 100 people cleaned your house, they "wouldn't get shit done".

If 100 people cleaned Prof. Vishkin's house, they would be finished in about 3 minutes.

How this is better than Intel's 80-core processor [arstechnica.com] remains to be seen. This "technology" looks like it's an overhyped version of GPGPU [gpgpu.org] or PhysX [ageia.com] .

My Name (5, Funny)

the eric conspiracy (20178) | more than 7 years ago | (#19684581)

'Space Heater'

Re:My Name (1)

Ice Wewe (936718) | more than 7 years ago | (#19684747)

Nah, that was the nickname for the Pentium 4 chip. I think we should hail the new, more energy-efficient chips; besides, they can't exactly heat that much space anymore. How about a term more fitting to the amount of heat they put out: 'Hobo Heaters'? Then they'll stop begging for money and start begging for large data files to process.

Re:My Name (1, Interesting)

the eric conspiracy (20178) | more than 7 years ago | (#19684921)

Actually, power consumption per instruction has remained pretty constant over the years if you exclude the Pentium 4. The Yonah uses about the same amount of power per instruction as the Pentium. So if you are running 100 times more instructions per second, well, you will be using 100 times more energy.

Name (1)

christurkel (520220) | more than 7 years ago | (#19684583)

Future Slashotting in the Waiting (FSW).

There's nothing here (2, Insightful)

IlliniECE (970260) | more than 7 years ago | (#19684597)

I RTFA... It handwaves so much about parallel computing that it seems they haven't discovered anything. All I see is "clock frequency can't increase, so we're going parallel"... Surely this can't be the extent of their research. The article claims it's 'easy to program', but there are zero specifics about why that would be the case. Can anyone tell me what they've done here (if anything)?

Re:There's nothing here (2, Interesting)

Holi (250190) | more than 7 years ago | (#19684679)

Well, you should learn to follow links.
It was quite easy from the article to find more information [umd.edu] about the project.

Parallel programming made easy ... (0)

Anonymous Coward | more than 7 years ago | (#19684947)

By redefining it.

Data parallel programming is a significant subset of parallel programming in general but it is relatively easy to get right to start with, so I don't see how XMT-C is such an advance.

Re:There's nothing here (0)

Anonymous Coward | more than 7 years ago | (#19685013)

Haha, well, I don't know about their hardware, but the programming model isn't really anything new, and it certainly isn't parallel programming made easy. There are other parallel C extensions like UPC and Cilk that work similarly. This sort of thing is very convenient for data-parallel applications, but an elegant solution to the critical-section problem it is not.

Re:There's nothing here (1, Insightful)

IlliniECE (970260) | more than 7 years ago | (#19685351)

And people who write articles should learn to write them more thoroughly. If the article doesn't look promising, I'm not going to spider across the web collecting as much as I can on it.

Re:There's nothing here (4, Informative)

James McP (3700) | more than 7 years ago | (#19685473)

Here's the deal.

Up 'til now, the Parallel Random Access Model (PRAM) has been a thought model of parallel processing. It hadn't been built. Some people had written programs to emulate a PRAM computer, but they were not complete versions.

It could work at a snail's pace and still be a technological accomplishment as it is the very first, complete, working, hardware PRAM computer. It's on par with the Z3, Colossus and Eniac, the first programmable computers (German, English, American, in historical order).

Fortunately, they made the algorithms work well, or at least, if the press release is to be believed, work so that 64 75MHz processors could produce 100x the performance of a current desktop on at least one particular function. Which is pretty impressive in first-time hardware, even if it turns out to be an obscurely used math function known only to about a dozen coders.

Limited Practical Applications (for now) (1, Interesting)

thesandbender (911391) | more than 7 years ago | (#19684601)

Assuming this actually works as detailed and the fine print on the claim isn't too onerous, there are three practical problems:

1. Many applications are limited by the speed of the user, not the computer. You can only type or click so fast.
2. Hardware would have to catch up to drive this beast. This would max out all known memory and storage systems. Not to mention your internet connection.
3. As has been mentioned time and again, until developers actually embrace multi-threading this will be relatively useless. Tests from various hardware sites have shown that going from the Core 2 Duo to the Core 2 Quad offers very little benefit except for a very small subset of users... who should probably be running workstations anyway (Video editing, 3D rendering, etc.)

However, I have a ton of HD content on my MythTV box that I would like to turn this processor and h264 loose on :) Maybe by the time this is a viable commercial product it will have more practical uses. (Remembering LOGO on my TI-99/4A... we've come a long way, baby.)

Re:Limited Practical Applications (for now) (4, Insightful)

p0tat03 (985078) | more than 7 years ago | (#19685023)

While I agree there are certain leaps to be made before this can be a mass-market item, I disagree fundamentally with point 1 that you make. You could have made the exact same argument about the old DOS Lotus office suite way back, 15 years ago. Those things still word process, and a 386 at 33MHz is certainly no slouch - I never had to sit around waiting for the software to respond to me or finish some ridiculously long task.

I'm sure you'd agree that these newfangled Pentiums and Core Duos are quite useful, even for the end user.

Think about features like predictive and contextual actions. Desktop search? Search-as-you-type? There are many ways to improve the usability of computers that require more and more performance. Honestly, if we can invent faster computers, we will invent ways to put the power to use in a productive, tangible way.

Re:Limited Practical Applications (for now) (1)

thesandbender (911391) | more than 7 years ago | (#19685465)

I agree... to a point... but I'm wondering where the limit is. You mentioned four possible applications. Let's be generous and say we broke each off into four threads per task... sixteen threads. Let's be even more generous and say there were four more tasks you didn't consider. All told, that's thirty-two threads... a tenth of the power we're talking about here. And... I'll go back to my second point. Currently, there are no memory or storage systems capable of feeding this. If it really is a 300x increase in processing power, then Moore's law predicts it will be almost a decade before current approaches can actually support this.

Re:Limited Practical Applications (for now) (2, Informative)

Morty (32057) | more than 7 years ago | (#19685147)


3. As has been mentioned time and again, until developers actually embrace multi-threading this will be relatively useless. Tests from various hardware sites have shown that going from the Core 2 Duo to the Core 2 Quad offers very little benefit except for a very small subset of users... who should probably be running workstations anyway (Video editing, 3D rendering, etc.)


RTFA. The article claims:


    "The 'software' challenge is: Can you manage all the different tasks and workers so that the job is completed in 3 minutes instead of 300?" Vishkin continued. "Our algorithms make that feasible for general-purpose computing tasks for the first time." ...
To show how easy it is to program, Vishkin is also providing access to the prototype to students at Montgomery Blair High School in Montgomery County, Md.


Parallel computing has been around for a while. One of the challenges of parallel computing has always been that it is inherently harder to code. These guys acknowledged this, but they say their prototype is "easy" to program. We'll see if they're right.

Re:Limited Practical Applications (for now) (4, Informative)

thesandbender (911391) | more than 7 years ago | (#19685399)

I'm going to make an assumption and say that you don't do a lot of systems programming. Threaded applications depend... heavily... on synchronizing data access. You simply can't take a single-threaded application and break it out across threads without having some context of how it's accessing its data and why. Imagine landing planes at an airport. It's a serial process... you just can't arbitrarily run it in parallel... "bad things" (tm) happen. The "algorithms" Mr. Vishkin is speaking of have no way of determining the context of the code being executed, and trying to break it out is a disaster waiting to happen.

There are applications where massive parallelism like this is fantastic... using my initial example... encoding video. Throw each frame off to one of the processors and you're processing 300 at a time (even there, there are limitations, because each frame requires information from the previous one).

But I stand by my statement... anyone who says they can take a serial application and run it in parallel is full of sh*t and they know it. In certain, limited circumstances, yes... but in general? NO.

Confidence: Low (5, Funny)

Lije Baley (88936) | more than 7 years ago | (#19684605)

Vaporac. Vaporlon. Vaporium. Whatever...

Re:Confidence: Low (1)

edwardpickman (965122) | more than 7 years ago | (#19684907)

This brings up a good point. Will Duke Nuke Em Forever require this chip? It's likely to be on the minimum specs for Windows 2012.

Re:Confidence: Low (1)

ScrewMaster (602015) | more than 7 years ago | (#19684929)

Only if you have the Smokum Mirrorum add-on.

Re:Confidence: Low (0)

Anonymous Coward | more than 7 years ago | (#19685007)

Good god man, is your sig a subtle reference to Spongebob Squarepants?

...If it's not, I am deeply ashamed of myself. Of course, if it is, I may bring shame on my entire family for recognizing it.

U is for uranium! ...BOMB!

Re:Confidence: Low (1)

Lije Baley (88936) | more than 7 years ago | (#19685039)

All...Hail...Plankton.

Re:Confidence: Low (2, Funny)

Refenestrator (1060918) | more than 7 years ago | (#19685011)

Or you could add in a temperature joke and call it the Vaporizer.

Re:Confidence: Low (1)

cli_rules! (915096) | more than 7 years ago | (#19685087)

Unobtanium!

i860? (2, Interesting)

Evil Pete (73279) | more than 7 years ago | (#19686005)

Anyone remember the hype of the i860 [wikipedia.org]? Great on paper, but not so great in reality. I really hope this works, though; the von Neumann architecture was always supposed to be a stop-gap (even von Neumann said so, I think).

Duh. (0)

Anonymous Coward | more than 7 years ago | (#19684651)

Supercomputing 2.0. Now, I'd like that 500 bucks in twenties, please.

Uhm, whatever it's always been called? (1)

Cafe Alpha (891670) | more than 7 years ago | (#19684665)

Hard to tell from some of those "papers", since they seem to be written for kindergarteners - or journalists. But with that much parallelism, I'm guessing these computers basically allow "dataflow"-style programming, with a certain amount of automatic decomposition, similar to the way PC chips decompose assembly into a simpler language on-chip.

Re:Uhm, whatever it's always been called? (1)

JimXugle (921609) | more than 7 years ago | (#19684961)

Hard to tell from the some of those "papers" since they seem to be written for kindergarteners - or journalists.


wait... there's a difference?

I don't know about you guys... (1)

Ub3rT3Rr0R1St (920830) | more than 7 years ago | (#19684675)

But I want that $500. Maybe I could use it to buy a board with a chip that will actually provide some routine functionality in the short term. Wouldn't that be the ultimate irony?

I name it (3, Funny)

Kohath (38547) | more than 7 years ago | (#19684681)

Bob

Contention Management Issues (1)

MarkPNeyer (729607) | more than 7 years ago | (#19684685)

All the processors in the world won't do you any good if you can't write the software to harness them, and conventional lock-based techniques are really really easy to screw up. I'm really curious to see what those 'rich algorithmic' solutions they've got are.

Human-guided autovectorization. (3, Interesting)

Ayanami Rei (621112) | more than 7 years ago | (#19684765)

You know, autovectorization looks good on paper. But for most tasks, it really doesn't net you any benefit unless you can separate all your work into non-overlapping chunks. You can't have any interdependencies in your working set (or you risk expensive, non-scalable locking), and if you're all pulling from a single data source to split up the analysis work, you'll spend a lot of time in contention for the pipe to that resource.

For example, it wouldn't make searching a database (scratch that, searching any data set) any faster unless the index was already pre-split among the processing units.

In this architecture the processing units have the same bus to RAM and disk on the front and back ends and have to deal with contention.

Your system is only as fast as the slowest serial part. Typically this is storage media, a network connection, or a memory crossbar. Processors really are fast enough for the non-embarrassingly-parallel stuff. They are at the right ratio with respect to the other, slower buses to do most general-purpose work.

If you want to do more than that, then it's other things - storage media, memory, I/O buses - that need to be multiplied in density and number. Only then can we see higher throughput.

Autovectorization is only good for things we already have offloading for anyway (TCP encryption, graphics, sound)... and for those general-purpose cases, like game AI, where you might want a linear algebra boost, NVidia has beaten these guys to the punch with the GP stream processing in its newest chips and the very flexible Cg language/environment.

Overhyped (5, Insightful)

rivenmyst137 (467812) | more than 7 years ago | (#19684723)

Oh, for god's sake. I don't understand why this is getting so much press. It was stupid when it went up on Digg, and it's stupid that it's showing up here. This isn't substantially different from any of the other parallel architecture and programming work that's been going on for the last two decades. Their benchmarks are against embarrassingly parallelizable algorithms like matrix multiplies and randomized quicksort, things that any half-intelligent lemur (with a math and cs class or two) could get to run quickly. The hard part is speeding up your average desktop application which, I guarantee you, is not spending the majority of its time doing matrix multiplies.

On top of that, their "parallel extension of von Neumann" amounts to adding primitives to start and stop threads into the language. Again, any half-intelligent lemur (with a slightly different skill set from the first) could have done that. And I think a few actually have (at the risk of comparing language researchers to lemurs). It doesn't solve the underlying problem.

Oh, and did we mention no floating point and the lack of any memory bandwidth to get data into and out of this thing?

This is over-hyped research and shameless self-promotion, and for some weird reason the press seems to be buying it. Stop it.

Re:Overhyped (0)

Anonymous Coward | more than 7 years ago | (#19685053)

You make matrix multiplies and randomized quicksort sound like trivial implementations on parallel hardware. I promise you, however, it's not as simple as you make it sound.

I've never, not even once, met a lemur that could do that. Ocelots: Yes, ocelots could do it, but not lemurs. Even ocelots would need some remedial linear algebra and algorithms tutoring.

Re:Overhyped (1)

phantomfive (622387) | more than 7 years ago | (#19685057)

This is over-hyped research and shameless self-promotion, and for some weird reason the press seems to be buying it

Because it's a contest. Free publicity. Hooray!

Their benchmarks are against embarrassingly parallelizable algorithms like matrix multiplies and randomized quicksort, things that any half-intelligent lemur (with a math and cs class or two) could get to run quickly

Dang what kind of lemurs do they have where you're from? We must find them and make them our president! Oh wait, you say we already did?

OK I admit it, that was low.

Re:Overhyped (4, Informative)

Doppler00 (534739) | more than 7 years ago | (#19685345)

Yeah, this article is pretty weak. "Woohoo! Look, we took a picture of a last-generation FPGA development board and wrote some nifty programs for it that prove our pet project!" I think very little of things like this makes it outside of academia. I'm not saying this research is unworthy, just not newsworthy.

And "parallel extension of von Neumann" exists. It's called OpenMP and it still takes a skilled programmer to understand.

Look at that board... it uses "SmartMedia" yeah... that means that:

1. This is OLD research
2. The board developers didn't have a clue
3. A very old development board is being used.

Re:Overhyped (1)

uarch (637449) | more than 7 years ago | (#19685471)

After skimming through the whitepapers I have to agree with you.

It reminds me a little of the dataflow architectures of the 70's. A quick google search will probably give you several reasons why it wasn't very effective in the real world. This design will suffer from many of the same problems.

These are the types of white papers we used to tear apart for fun when I was in grad school. They boast all these breakthroughs that aren't very different from anything else that's done (not uncommon even when great work has been done) and they avoid any mention of (let alone solutions to) all the problems associated with their approach. The benchmarks they're using to gauge performance just make it even funnier.

Re:Overhyped (2, Funny)

uarch (637449) | more than 7 years ago | (#19685489)

Actually, the more I think about it they could have made a better whitepaper using this:

http://pdos.csail.mit.edu/scigen/ [mit.edu]

They should call it... (1)

kobatan (1103577) | more than 7 years ago | (#19684733)

kobatan.

I wonder if they can get the domain cheaply?

Analogy at work... (1)

RuBLed (995686) | more than 7 years ago | (#19684741)

"Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method."


Brilliant! Even my mother had not thought of such an idea.

Where parallelisms break down (2)

EmbeddedJanitor (597831) | more than 7 years ago | (#19685041)

Suppose you had 100 cleaners in your house. They'd all be tripping over each other and unplugging each other's vacuum cleaners to plug in their own. And all their minivans would cause a traffic jam in your driveway.

Pretty much the same with any multi-processor technology: shared resources like buses are the major limitation.

Re:Where parallelisms break down (2, Interesting)

rbanffy (584143) | more than 7 years ago | (#19685591)

Sun had something with tiny radio interconnects between chips. This way, they could have thousands of "pins" on the chip, and the only metal pins you would need would be power and ground. If I remember correctly, I had a server whose memory had to be upgraded about 8 (or 9) modules-with-lots-of-pins at a time, so wide buses are nothing new.

Intel also had something with optical interconnects, which are also nice, since you can place your "connectors" anywhere on the chip, not just around the borders, and, if you can aim properly, the receivers can be much smaller than the pads around a current chip (or, by properly spreading the signals, one could synchronize many receivers to a single source very efficiently).

We may not be constrained by the number of pins a connector has for that much longer.

Re:Analogy at work... (1)

Repton (60818) | more than 7 years ago | (#19685049)

"Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method."

The kitchen cleaner will grab the bucket and the bathroom cleaner will grab the mop, and neither will be able to get any work done. The rest will be tripping over each other in the hallways, and spend half their time queueing for the toilets...

How about (1)

jshriverWVU (810740) | more than 7 years ago | (#19684759)

"OMG I gotta have It (TM)" or Deep Silicon :)

YuO fail it (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19684845)

We'll be able to and arms and dick dim. Due to the of Jordan Hubbard Pooper. Nothing look at your soft, are almost is dying.Things More. If you feel by the politickers beyond the scope of problem stems That supports it will be among Startling turn start a holy war

MISCELLANEOUS CONDITIONS: (0, Offtopic)

DrunkenTerror (561616) | more than 7 years ago | (#19684935)

All entries become the property of the University and will not be returned. By participating, entrants agree to abide by and be bound by these Official Rules and the decisions of the University, which shall be final and binding with respect to all issues relating to this Contest. It is your responsibility to ensure that you have complied with all of the conditions contained in the Official Rules. The University is not responsible for any lost, late, misdirected, stolen, illegible, incomplete entries, or for any computer, online, telephone or technical malfunctions that may occur. The University is not responsible for any incorrect or inaccurate information, whether caused by website users, any of the equipment or programming associated with or utilized in the Contest, or any technical or human error which may occur in the processing of submissions in the Contest. The University assumes no responsibility for any error, omission, interruption, deletion, defect, delay in operation or transmission, communications line failure, theft or destruction or unauthorized access to, or alteration of, entries. The University is not responsible for any problems, failures or technical malfunction of any telephone network or lines, computer online systems, servers, providers, computer equipment, software, email, players or browsers, on account of technical problems or traffic congestion on the Internet, at any website, or on account of any combination of the foregoing. The University is not responsible for any injury or damage to participants or to any computer related to or resulting from participating or downloading materials in this Contest. 
If, for any reason, the Contest is not capable of running as planned, including infection by computer virus, bugs, tampering, unauthorized intervention, fraud, technical failures, or any other causes beyond the control of Contest which corrupt or affect the administration, security, fairness, integrity or proper conduct of this Contest, the University reserves the right at its sole discretion to cancel, terminate, modify or suspend the Contest and select winners from among all eligible entries received prior to the cancellation. Persons found tampering with or abusing any aspect of this Contest, or whom the University believes to be causing malfunction, error, disruption or damage will be disqualified. CAUTION: ANY ATTEMPT BY AN ENTRANT OR ANY OTHER INDIVIDUAL TO DELIBERATELY DAMAGE ANY WEBSITE OR UNDERMINE THE LEGITIMATE OPERATION OF THE CONTEST MAY BE A VIOLATION OF CRIMINAL AND CIVIL LAWS. SHOULD SUCH AN ATTEMPT BE MADE, SPONSOR RESERVES THE RIGHT TO SEEK DAMAGES FROM ANY SUCH PERSON TO THE FULLEST EXTENT PERMITTED BY LAW. The University reserves the right to correct any typographical, printing, computer programming or operator errors.

Non-US residents ineligible to enter (2, Informative)

bh_doc (930270) | more than 7 years ago | (#19684977)

Second paragraph of the rules:

THE FOLLOWING CONTEST IS INTENDED FOR PLAY IN THE UNITED STATES AND SHALL ONLY BE CONSTRUED AND EVALUATED ACCORDING TO UNITED STATES LAW. DO NOT ENTER THIS CONTEST IF YOU ARE NOT LOCATED IN THE UNITED STATES.

Even though there is a country field in the form. WTF?

They don't mention that on the form page, either. It peeves me just a little bit that they would do that. I mean, how many people actually read these conditions things, anyway? Can't say I'm surprised, though.

Re:Non-US residents ineligible to enter (0)

Anonymous Coward | more than 7 years ago | (#19685921)

I'm guessing that they don't want liability for all the stuff they disclaim against in other countries.

A modest proposal (1, Funny)

Anonymous Coward | more than 7 years ago | (#19684991)

Call it Grendel - it has no ARM

hmm (1)

hansoloaf (668609) | more than 7 years ago | (#19685001)

how about naming it Vizi?

Re:hmm (1)

Ecuador (740021) | more than 7 years ago | (#19685437)

If they run "Vizi" through their Greek department, they will find out it means "boob" in Greek.

$500??? (0)

Anonymous Coward | more than 7 years ago | (#19685009)

I could get more than that for naming my neopet.

How about....Glumphoof

An algorithm came up with it.

How About "Almost Fast Enough For Vista"? (1)

NeverVotedBush (1041088) | more than 7 years ago | (#19685063)

But I doubt that's worth $500...

Skynet or Borg (1)

detain (687995) | more than 7 years ago | (#19685235)

Skynet or Borg: both are great, recognizable names referring to a massive supercomputer, or perhaps a massive cluster of nodes. Either way, both those names would pwn. resistance is futile

Here's your name: (0)

Anonymous Coward | more than 7 years ago | (#19685237)

I dub thee: SKYNET

crunch data at off peak times (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19685267)

My present PC does all my tasks just fine, and I have no desire to speed-up any part.

I schedule my DIVX jobs for night-time processing, and all my videos are ready in the morning. I have found it more beneficial to learn to crunch my data at off-peak times, than to speed up the process (from a money perspective). This means I need to anticipate when I need a result, and schedule it accordingly.

This is tough when I play games, as I have to script all my movements in advance... but my computer is set to the highest resolution without any hardware upgrades (the downside is that I can only see 1fps, but I trust that I am kicking major ass!).
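The off-peak scheduling idea above boils down to deferring work until a target hour. A toy Python sketch of that calculation (the helper name and the 2 a.m. start hour are illustrative choices, not anything from the comment):

```python
import datetime

def seconds_until(hour):
    # How long to sleep so that a batch job (e.g. an overnight encode)
    # starts at the next occurrence of `hour`:00.
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        # That hour already passed today; schedule for tomorrow.
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

if __name__ == "__main__":
    # A scheduler would time.sleep(seconds_until(2)) and then launch the job.
    print(seconds_until(2))
```

In practice cron or Task Scheduler does this more robustly, but the arithmetic is the same.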

I call it the iStove (1)

ILuvRamen (1026668) | more than 7 years ago | (#19685313)

okay first....

The prototype developed by Uzi Vishkin and his Clark School colleagues uses a circuit board about the size of a license plate on which they have mounted 64 parallel processors
Sell the rights to Mac, make the processor board a remote device, and cook some eggs on that sucker. Seriously, they say software is the hardest challenge? How about keeping 64 processors on a license-plate-sized board cool?

Transparent Parallelism? (1)

gandracu (951016) | more than 7 years ago | (#19685371)

Transparel.

Please vote on the new name (3, Funny)

cashman73 (855518) | more than 7 years ago | (#19685407)

I will either nominate the name, "Giant Douche," or, "Turd Sandwich," depending on which one slashdotters vote for.

This is just an old FPGA development board (1)

Doppler00 (534739) | more than 7 years ago | (#19685419)

http://www.dinigroup.com/index.php?product=DN8000k10pci [dinigroup.com]
There you go! It's just a Virtex-4 development board. Nothing special. I mean, if they had used this graphic http://www.dinigroup.com/DN9000k10PCI.php [dinigroup.com] it would have been a little more impressive.

Name (1)

partowel (469956) | more than 7 years ago | (#19685537)

I call thee.....ummmmm.....

data.

i mean.....

The Terminator.

nah....lets go with....

HAL 9000.

oh boy oh boy....lets call it Matrix 001.

Some possible names (1)

YU Nicks NE Way (129084) | more than 7 years ago | (#19685659)

"VaporWire"

"Parallel Lies Processor"

"iProcessor"

Hand over the $500 right now (5, Funny)

Enderandrew (866215) | more than 7 years ago | (#19685735)

iPerbole©

I think I've got a name for it... (1)

Whuffo (1043790) | more than 7 years ago | (#19685743)

How about "Wishful Thinking"?

They describe the same old massively parallel computing idea but gloss over the problems involved. This old chestnut keeps coming to the surface every few years but nobody ever seems to show any working hardware...

What about iCPU? (1)

trunks14 (1116341) | more than 7 years ago | (#19685821)

What about iCPU? Has some other company already done the 'i' prefix thing? I mean, like iPod or something like that?

Deep CPU
MP (Moon MacroProcessor)
MPU (Macrohard Processing Unit)

Power (1)

fr4nk (1077037) | more than 7 years ago | (#19685895)

'capable of computing speeds up to 100 times faster than current desktops.'

So, how many laptop miles is this? If it has more power than one laptop mile, they could name it 'Milestone Computer'!

name? (0)

Anonymous Coward | more than 7 years ago | (#19685957)

xyzzy
that word no one can pronounce.
from Advent :)

Oblig (0)

Anonymous Coward | more than 7 years ago | (#19685963)

How about "Puter," as in, "What? Did your mother purchase for you a PUTER for Christmas?"

Hmmm.... (1)

bjackson1 (953136) | more than 7 years ago | (#19685995)

A supercomputer on a chip....so it should be named Altivec?

FPGAs (2, Informative)

CompMD (522020) | more than 7 years ago | (#19686149)

It appears to be a few FPGAs. With FPGAs, you can optimize the logic to represent algorithms for faster execution than on general-purpose processors. Simply put, you use more of the gates available on the chip. That appears to be what these guys are doing. It also appears that there is a single memory controller (I think that is what the QuickLogic chip is) and there is only one DRAM module installed on the board. It would be interesting if the board had a unified memory architecture. There is a separate Xilinx Spartan FPGA on the board that does who-knows-what, but I wouldn't be surprised if it was involved in communication with the processing chips. Of course, this is speculation, but it would seem logical for a board layout.

Just my thoughts.
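Whatever the board's exact layout, the kind of parallelism Uzi Vishkin's group is known for is PRAM-style parallel algorithms, and the textbook example is a balanced-tree sum: n/2 additions run simultaneously at each of roughly log n levels. A minimal serial simulation of that schedule (illustrative only; it models the algorithm's level-by-level structure, not the prototype hardware):

```python
def tree_sum(values):
    # Simulate a PRAM balanced-tree reduction: each pass combines
    # adjacent pairs. On a real parallel machine, all additions in
    # one level would execute simultaneously, so n values need only
    # about log2(n) time steps instead of n-1.
    level = list(values)
    while len(level) > 1:
        if len(level) % 2:
            level.append(0)  # pad odd-length levels so pairing works
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    print(tree_sum(range(1, 9)))  # 1+2+...+8 = 36
```

The hard part, as the article summary admits, is making this style of programming easy across 64 real processors rather than one Python list.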

names (1)

morlock_man (884105) | more than 7 years ago | (#19686171)

The Synchronicity System

or

The Simultaneity System

Or some combination of those words.