Star Bridge FPGA "HAL" More Than Just Hype

CowboyNeal posted more than 11 years ago | from the smoke-clearing-mirror-disappearing dept.

Gregus writes "Though mentioned or discussed in previous /. articles, many folks (myself included) presumed that the promises of Star Bridge Systems were hype or a hoax. Well, the good folks at NASA Langley Research Center have been making significant progress with this thing. They have more info and videos on their site, beyond the press release and pictures posted here last year. So it's certainly not just hype (though $26M for the latest model is a bit beyond the $1,000 PC target)."

Greets 2 Gobbles (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5308838)

This Post, Although perhaps not the first, Is Worthy!

Digital Teenz (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5308840)

And Digital Teenz [digitalteenz.com] will be the first to use this technology. Remember Digital Teenz [digitalteenz.com] is where its at!

Re:Digital Teenz (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5309003)

holy shit thanks. i just jerked my schlong to those pics.

thanks for some great wank material!

Can you (-1, Redundant)

Anonymous Coward | more than 11 years ago | (#5308841)

Imagine a beowulf clu. . . oh nevermind.

Re:Can you (0)

Anonymous Coward | more than 11 years ago | (#5309186)

Let's see now... $26M... that's 26,000 thousands.
So lg 26,000 ≈ 14.666. Then multiply by 1.5 years per doubling of purchasing power to get 21.99 years. This computer should cost a thousand of today's dollars in 22 years. I'll put in a pre-order for my grandchild. I'll order the Beowulf cluster for my great-grandchild.
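
For the curious, the parent's arithmetic checks out; here is a quick sanity check in Python (the 1.5-year doubling period is the parent's assumption, not a hard law):

    import math

    price_now = 26_000_000   # dollars today
    target = 1_000           # the "$1,000 PC" goal
    doublings = math.log2(price_now / target)  # price halvings needed
    years = doublings * 1.5                    # 1.5 years per doubling (assumed)
    print(f"{doublings:.3f} doublings -> {years:.1f} years")  # 14.666 -> 22.0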

uhh, (5, Funny)

Anonymous Coward | more than 11 years ago | (#5308842)

uhh, so which link is the story?

Re:uhh, (0)

Anonymous Coward | more than 11 years ago | (#5308902)

Amen to that.

Re:uhh, (1)

paulcammish (542971) | more than 11 years ago | (#5309515)

uhh, so which link is the story?

I have no idea - I was going to make a 'HAL 9000 from 2001' comment here, but I'm worried it might actually be On Topic...

Re:uhh, (1)

ryochiji (453715) | more than 11 years ago | (#5310500)

Take a look here:
http://hummer.larc.nasa.gov/acmbexternal/Personnel/Storaasli/images/HALNews.html [nasa.gov]

If you watch the "speedup" movie, the guy talks about processing speeds equivalent to "100,000 gigs" (not sure if it's GHz or GFLOPS or what, though); that sounds awfully fast. The demo shows the thing calculating fractals 35x faster than a PC while consuming only 0.1% of the resources.

Obviously, I have no clue how this thing works other than that it's mighty fast. I'm also thinking that with a bunch of these things, cracking RSA might not be so difficult after all.

NFP (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5308847)

Not first post!

$26M ...just a drop in the bucket (5, Funny)

Superfarstucker (621775) | more than 11 years ago | (#5308854)

26M? hah! i save that much every year pirating software and audio off the net.. puh-leez!

Re:$26M ...just a drop in the bucket (1)

SB5 (165464) | more than 11 years ago | (#5309561)

26M? hah! i save that much every year pirating software and audio off the net.. puh-leez!


and Kevin Mitnick did more than 5 times that in damages in the mid-nineties, so it can't be that hard.

What is Star Systems? (5, Insightful)

$$$$$exyGal (638164) | more than 11 years ago | (#5308855)

Star Bridge Systems is the leading developer of truly parallel Hypercomputers. Our patent-pending hardware, software and Viva programming language are reinventing computer programmability to create the fastest, most versatile and energy-efficient computing systems available for solving many problems that require high computational density.

That's directly from their site. I wish the /. summary would have mentioned parallel hypercomputers. And note that when you search Google for "parallel hypercomputers", you only get the one hit from Star Bridge Systems (and soon you'll get a hit for this comment on /. ;-)). No wonder people thought this was a hoax.

--sex [slashdot.org]

Re:What is Star Systems? (1)

TopShelf (92521) | more than 11 years ago | (#5308887)

It's amazing sometimes the things that DO get posted, considering the number of interesting stories that get rejected. Maybe it's a weekend thing...

Not that this technology isn't interesting, but the writeup above is awful!

What is a "hypercomputer"? (1)

maddogsparky (202296) | more than 11 years ago | (#5308921)

Hyper as in "hypercube" (a cube in more than three dimensions, where every vertex is connected with every vertex that is parallel to it)? Hyper as in hypervelocity (Mach 10 and above)? Hyper as in spastic or really, really, really neato?

Is "hypercopmuter" a real word with a standardized definition?

Re:What is a "hypercomputer"? (1)

moonbender (547943) | more than 11 years ago | (#5308977)

Is "hypercopmuter" a real word with a standardized definition?
Hypercopmuter? Perhaps a cop commuting through space and time? SCNR. I think the hyper is not based on any scientific or mathematical definition; they just picked it up because they thought it sounded cool. Their product overview page [starbridgesystems.com] defines what they think hypercomputers are:
Our products include the implementation of the relatively new computer chip, the FPGA (Field Programmable Gate Array) along with our patented Viva software, to form what we term 'Hypercomputers'. These machines are capable of truly extraordinary computational feats. The result is simply the creation of a new kind of computer system that gives users tremendous power with an intuitive, state-of-the-art software tool.

Re: "hypercopmuter" (sic) (1)

PetiePooo (606423) | more than 11 years ago | (#5308983)

Is "hypercopmuter" a real word with a standardized definition?

Never heard of it. But anything to quiet those pesky, over-zealous, redneck sheriff's deputies sounds good to me!

Sorry, couldn't pass it up...

Re:What is a "hypercomputer"? (2, Informative)

$$$$$exyGal (638164) | more than 11 years ago | (#5308989)

Here's the google search for only the word: "hypercopmuter [google.com] "

Your original search: hypercopmuter returned zero results.
The alternate spelling: hypercomputer returned the results below.

Here's a Feb'1999 Wired Article [wired.com] that explains what Star Bridge considers a hypercomputer.

--naked [slashdot.org]

Re:What is a "hypercomputer"? (1)

SloWave (52801) | more than 11 years ago | (#5309560)

I think the keyword here is "Hype"

Re:What is a "hypercomputer"? (1)

You're All Wrong (573825) | more than 11 years ago | (#5310083)

Too bloody right!
"""
It is called a fractal architecture, where the structure of the lower level is repeated at the higher level.
"""

Wow - they've reinvented the binary tree. But given it a new modern name. I'm _sooooo_ happy for them.

YAW.

Re:What is a "hypercomputer"? (1)

You're All Wrong (573825) | more than 11 years ago | (#5310115)

Note, I did not mean to imply that the topology they use is a tree, but that a tree could be called a fractal too.

I was just overly annoyed at them as they told me that I'd "Loaded page in 0.012 seconds", when it took about 2 fucking seconds. That means:
a) they're liars
b) they're tossers for making such a fucking stupid statement.

Anger now vented. Back as you were.

YAW.

Re:What is Star Systems? (0)

Anonymous Coward | more than 11 years ago | (#5309070)

Reading their pages, it really does look like a hoax. All they seem to do is use a lot of made up words, and vaguely reference topics that are "hot" and fashionable in computing.

Re:What is Star Systems? (1)

Alan Partridge (516639) | more than 11 years ago | (#5309338)

Why would you think it was a hoax? Anyone who's used a device that relies on FPGAs for throughput KNOWS that there's an AWFUL lot of potential there - one obvious example is Quantel's range of video editing machines - conventional host-CPU-based systems cannot compete for speed, but certainly win for flexibility. If FPGAs become "configurable" by way of a new programming technique, I can't see any reason why it shouldn't provide a good solution for HPC applications.

Re:What is Star Systems? (1)

You're All Wrong (573825) | more than 11 years ago | (#5310144)

What happened to the researchers in Oxford (not pure departmental research, they'd spun out an independent company with university backing) who had something along the lines of "dynamically compiled C onto FPGAs" about 4 years ago?

(It was probably more like Occam than C, to be honest, as parallelism was a given. However, C.A.R. Hoare was not involved in this spin-off.)

YAW.

Re:What is Star Systems? (2, Insightful)

SloWave (52801) | more than 11 years ago | (#5309551)

Sounds like a Unisys type company setting themselves up for another bogus IP land grab.

In SOVIET DALI (-1)

Anonymous Coward | more than 11 years ago | (#5308860)

WATCHES melt YOU

Innovative variation (0)

Anonymous Coward | more than 11 years ago | (#5308935)

I appreciate it.

Finally a solution! (4, Funny)

TheRaven64 (641858) | more than 11 years ago | (#5308866)

Here we see the solution to the problem of too many comments about a /. story. Simply obfuscate the story so much that no one can figure out what it's about, or even find the link to the original. Hats off to Gregus and CowboyNeal for the idea.

Re:Finally a solution! (0)

Anonymous Coward | more than 11 years ago | (#5308909)

This story was not meant for newbies, k thx! Those of us who have been participating since before 2002 are well aware of Star Bridge Systems and the crazy promises their FPGA monsters make. Seemed too good to be true, and they have always had the crappiest ~1994 web page, 50 employees, and are based out of Utah or some such sorry place. But if NASA is taking them seriously, it's news.

Re: Finally a solution! (1)

Black Parrot (19622) | more than 11 years ago | (#5309236)


> Here we see the solution to the problem of too many comments about a /. story. Simply obfuscate the story so much that no one can figure out what it's about, or even find the link to the original. Hats off to Gregus and CowboyNeal for the idea.

Yeah, but it sure makes it hard to figure out who to flame for not reading the story.

Re:Finally a solution! (1)

BryanL (93656) | more than 11 years ago | (#5309330)

Amen! I was expecting a story à la Stargate. Ah, traveling to other worlds via a Star Bridge portal.

Heh (4, Funny)

code shady (637051) | more than 11 years ago | (#5308868)

I like that little "page loaded in x seconds" blurb in the corner.

I'm having waaay more fun than I should be refreshing the page and watching the load times get longer . . . and looooonnnnger . . . . and looooonnnnnnngggggggger.

Hey, it beats workin'.

Re:Heh (1)

killionk (75557) | more than 11 years ago | (#5308962)

Does anyone know how they did that? The numbers change when you refresh the page. I looked at the source and all it had was...

Loaded page in 0.011 seconds.


So I assume that the numbers get added to the page when it is rendered on the server side. Is this some sort of Apache plugin?

Re:Heh (0)

Anonymous Coward | more than 11 years ago | (#5309367)

Sounds like a server-side include or something similar.
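
Most likely. As a toy illustration (this is a sketch, not the site's actual code), a handler can time its own render and stamp the result into the page before sending it; in Python:

    import time

    def render_page(body_html):
        """Time the server-side render and stamp it into the footer."""
        start = time.perf_counter()
        html = "<html><body>" + body_html  # stand-in for real template work
        elapsed = time.perf_counter() - start
        html += f"<p>Loaded page in {elapsed:.3f} seconds.</p></body></html>"
        return html

    print(render_page("<h1>Hypercomputers</h1>"))

Note that such a stamp measures server render time only, not network transfer, which would explain the earlier gripe about a page that claims 0.012 seconds yet takes 2 seconds to arrive.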

Re:Heh (1)

SB5 (165464) | more than 11 years ago | (#5309573)

That is kind of scary: "Loaded page in 0.011 seconds"? What was it like before being slashdotted, 0.001 seconds?

First Spheral Solar, now StarBridge. (0)

Anonymous Coward | more than 11 years ago | (#5308874)

All the old hype is making another round. Well, that's cool. But as usual we're still left to speculate on when these supposedly consumer oriented things will ever come anywhere near the consumer market. The original StarBridge hype said they were gunning for Microsuck, but that seems off the map this time around.

Daisy Daisy... (2, Funny)

RobertTaylor (444958) | more than 11 years ago | (#5308878)

Well the good folks at NASA Langley Research Center have been making significant progress with this thing

I can see it now...

*techie smacks the machine

HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."

Re: Daisy Daisy... (1)

Black Parrot (19622) | more than 11 years ago | (#5309250)


> HAL: "I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal."

We've missed our window of opportunity for creating HAL. If we started today, once he obtained basic sentience he'd waste all his time trolling Slashdot instead of doing his homework, and never pass his qualifications for flying a spaceship.

Re: Daisy Daisy... (0)

Anonymous Coward | more than 11 years ago | (#5309544)

Heheheheh. This thread is way underrated. LOL ;[)

That shit is so gay (-1)

Anonymous Coward | more than 11 years ago | (#5308885)

Fucking 80's style

Re:That shit is so gay (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5308938)

Before you deride anything as gay, I totally suggest you do one of the following:

1) Fuck a man's ass
2) Get a blowjob from a man
3) Have your ass licked by a man
Optional:
4) Suck a dick, with out without swallowing
5) Get fucked by a man

I think if you do any one of those, you will find that calling something gay is in fact a compliment. Man on man sex is some of the greatest sex you can have. Much better than any coffee can cunt. Remember, a man knows what feels best for himself and for you.

Well, a working Starbridge would be cool... (1)

C A S S I E L (16009) | more than 11 years ago | (#5308891)

...but getting hold of raw Naqahdah must be difficult, and I don't think anyone's managed to get a machine booted beyond Chevron Six.

More seriously, the programming language for this smells a bit snake-oilish, as do most parallel programming languages, especially those touted by hardware companies. (Occam anyone?)

Re:Well, a working Starbridge would be cool... (1)

Buzz_Litebeer (539463) | more than 11 years ago | (#5309117)

Actually, the most recent Starbridge cult was destroyed along with Rubra in the uprising of the undead on the habitat Valisk. (Reality Dysfunction; kinda obscure.)

Dont be dissing occam (0)

Anonymous Coward | more than 11 years ago | (#5309368)

Bah... software developers simply don't know what is good for them; look what they chose. C, the foremost reason for the instability and insecurity of software in general today.

Occam has been improved over time; mobile data types were a huge improvement. If the language had prototype-based OO and a better toolchain, I think it would have a chance at success.

Occam is still one of the very few remotely practical languages which can be guaranteed to be aliasing-free (Fortran, for instance, can't; the compiler simply assumes it is) and one for which automatic analysis of programs for deadlock/race/livelock/etc. conditions is much easier than for most other languages (ties in with the aliasing).

Consumer usage (2, Insightful)

W0lphAU (643363) | more than 11 years ago | (#5308893)

I may be way off here, but it seems to me that if you're going to market this sort of product for the consumer market, the point to emphasize would be the potential to pump out millions of dirt-cheap little processors, exactly the same in manufacturing, and then apply the relevant "program" and turn them into a control chip for a coffeemaker or alarm clock or whatever.

Re:Consumer usage (1)

starm_ (573321) | more than 11 years ago | (#5309704)

That would be a huge waste of FPGA technology. You can use a $2 microcontroller that contains a 15MHz CPU to control smart consumer electronics. The chips can be programmed in C. motorola [motorola.com] You don't need specific chips anymore. FPGAs implement massive parallelism. In what consumer electronics do you need massive parallelism? The $2 microcontroller will be a lot easier to program, since it only does one instruction at a time and you don't need to worry about reconfiguring your chip hardware and synchronizing the different parts of it like in an FPGA.
The only place in consumer electronics where an FPGA would be useful would be in applications where space is critical, like in PDAs and handhelds. There the FPGA could be reprogrammed to be used as different peripherals. For example, if you need a sound card, voilà, the chip transforms into one. Then later you need a modem, and again it is programmed into the chip. It would save space by having one chip transforming into different chips. But I'm not even sure the gain would be that big compared to having one standard chip that contains video card/modem/sound card modules that can be turned on and off.

Re:Consumer usage (1)

sketerpot (454020) | more than 11 years ago | (#5310080)

I could also be way off here, but I think that the point of this was to be able to compile programs in a high level language to actual hardware so that it would be faster and take less electricity and stuff. They added parallel computing to the mix, and now they have something really neat---if they can pull it off. For consumer electronics you use microcontrollers that you can program. They're basically little dinky computers that cost a few bucks apiece. But if you manage to have very parallel programs running on a bunch of FPGAs, that would be as if you had created the whole program using special-purpose hardware. Now if only more people were working on this, I'd get very excited.

They say... (-1)

Anonymous Coward | more than 11 years ago | (#5308894)

...IN SOVIET GERMANY, DIE actually cares about this story...

AMD Will live forever>>>>>>!

reconfigurable hype (2, Insightful)

g4dget (579145) | more than 11 years ago | (#5308899)

People have been trying to use FPGAs for general purpose computing for as long as there have been FPGAs. Reconfigurable computing turns out to be pretty hard--it's hard to program these kinds of machines.

Now, maybe someone will be able to make this go. But this company doesn't look like it. If you manage to get to their web site and look at the programming language "Viva" they have designed, it looks like you are drawing circuit diagrams. Imagine programming a complex algorithm with that.

There are already better approaches to programming FPGAs (here [clemson.edu] , here [colostate.edu] , here [berkeley.edu] ). Look for "reconfigurable computing" on Google and browse around.

FPGA experiences (5, Informative)

goombah99 (560566) | more than 11 years ago | (#5309035)

I've brushed up against reconfigurable computing engineers in various applications I've had over the years. The last one was for trying to process laser radar returns coming in at gigabits per minute so we could do real-time 3-D chemical spectroscopy of the atmosphere at long range. The problem with conventional hardware was that the busses were too slow, the data rate too fast to cache, and the volume too much to archive on disk. You could not efficiently break the task into multiple CPUs, since just transferring the information from one memory system to the next would become the bottleneck, breaking the system.

FPGAs worked pretty well here because they could handle the fire-hose data rate from front to back. Their final output was a small number of processed bytes that could then go to a normal computer for display and storage.

The problems the engineers had were twofold. First, in the early chips there were barely enough gates to do the job. And in the later ones from Xilinx there were plenty of transistors, but they were really hard to design properly. The systems got into race conditions where you had to use software to figure out the dynamic properties of the chip to see if two signals would arrive at the next gate in time to produce a stable response. You had to worry where on the chip two signals were coming from. It was ugly, and either you accepted instability or failed prototypes, or you put in extra gates to handle synchronization--which slowed the system down and caused you to waste precious gates.

Still, my impression at the time was WOW: here is something that is going to work, it's just a matter of getting better hardware compilers. Since then Los Alamos has written a compiler that compiles C to hardware and takes into account all these details it used to take a team of highly experienced engineers/artists to solve.

Also, someone leaked a project going on at National Instruments that really lit up my interest in this. I don't know what ever became of it, maybe nothing, but the idea was this. National Instruments makes a product called "labview", which is a graphics-based programming language whose architecture is based on "data flows" rather than procedural programming. In data flows, objects emit and receive data asynchronously. When an object detects that all of its inputs are valid data, it fires, does its computation (which might be procedural in itself, or it might be a hierarchy of data-flow subroutines hidden inside the black box of the object) and emits its results as they become valid. There are no "variables" per se, just wires that distribute emitted data flows to other waiting objects. The nice thing about this language is that it's wonderful for instrumentation and data collection, since you don't always know when data will become available or in what order it will arrive from different sensors. Also, there is no such thing as a syntax error, since it's all graphical wiring, no typing; thus it is very safe for industrial control of dangerous instruments.
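
To make that firing rule concrete, here is a toy sketch (plain Python, nothing like LabVIEW's actual engine) of a dataflow object that fires only once every input slot holds valid data:

    class Node:
        """Toy dataflow node: fires once every input slot holds valid data."""
        def __init__(self, func, n_inputs, downstream=()):
            self.func = func
            self.inputs = [None] * n_inputs
            self.downstream = list(downstream)  # (target node, input slot) pairs

        def receive(self, slot, value):
            self.inputs[slot] = value
            if all(v is not None for v in self.inputs):  # all inputs valid?
                result = self.func(*self.inputs)         # fire
                self.inputs = [None] * len(self.inputs)  # consume the tokens
                for node, dest in self.downstream:
                    node.receive(dest, result)           # emit downstream

    # Wire up (a + b) * c; inputs may arrive in any order.
    show = Node(print, 1)
    mul = Node(lambda x, y: x * y, 2, [(show, 0)])
    add = Node(lambda x, y: x + y, 2, [(mul, 0)])
    mul.receive(1, 10)  # c arrives first: mul just waits
    add.receive(0, 3)   # a
    add.receive(1, 4)   # b -> add fires, mul fires, prints 70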

Anyhow, the idea was that each of these "objects" could be dynamically blown onto an FPGA. Each would be a small enough computation that it would not have design complications like race conditions, and all the objects would be self-timed with asynchronous data flows.

The current state of the art seems to be that no one is widely using the C-code or the flow-control languages. Instead they are still using these hideous dynamical modelling languages that don't meet the needs of programmers because they require too much knowledge of the hardware. I don't know why. Maybe they are just too new.

However, these things are not a panacea. For example, recently I went to the FPGA engineers here with a problem in molecular modeling of proteins. I wanted to see if they could put my Fortran program onto an FPGA chip. They could not, because 1) there was too much stored data required and 2) there was not enough room for the whole algorithm. So I thought, well, maybe they could put some of the slow steps onto the FPGA chip: for example, given a list of 1000 atom coordinates, return all 1 million pairwise distances. This too proved incompatible, for a different reason. When these FPGA chips are connected to a computer system, the bottleneck of getting data into and out of them is generally worse than that of a CPU (most commercial units are on PCMCIA slots or the PCI bus). Thus the proposed calculation would be much faster on an ordinary microprocessor, since most of the time is spent on reads and writes to memory! There was, however, one way they could do it faster, and that was to pipeline the calculations, say 100- or 1000-fold deep, so that you ask for the answer for one array, and then go pick up the answer to the array you asked about 1000 arrays ago. This would have complicated my program too much to be useful.
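
The pipelining workaround described above is easy to sketch: submissions and results are deliberately offset by the pipeline depth (the numbers here are illustrative, and compute() stands in for the FPGA round trip):

    from collections import deque

    DEPTH = 1000  # pipeline depth; the parent suggests 100- to 1000-fold

    def pipelined(compute, arrays):
        """Submit array N now; hand back the result for array N - DEPTH."""
        in_flight = deque()
        for arr in arrays:
            in_flight.append(compute(arr))  # stand-in for a write to the FPGA
            if len(in_flight) > DEPTH:
                yield in_flight.popleft()   # result requested DEPTH arrays ago
        while in_flight:                    # drain the pipeline at the end
            yield in_flight.popleft()

    results = list(pipelined(sum, ([i, i + 1] for i in range(5000))))
    print(len(results), results[0])  # 5000 results; the first is 0 + 1 = 1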

These new FPGAs are thus exciting because they are getting so large, and have so much onboard storage and such fast internal busses, that a lot of the problems I just mentioned may vanish.

My knowledge of this is about a year out of date, so I apologize if some of the things I said are not quite state of the art. But I suspect it reflects the commercially available world.

Re:FPGA experiences (1)

g4dget (579145) | more than 11 years ago | (#5309109)

Also, there is no such thing as a syntax error, since it's all graphical wiring, no typing; thus it is very safe for industrial control of dangerous instruments.

Ummm--that's kind of the equivalent of the panic glasses from the Hitchhiker's Guide to the Galaxy: they turn dark when there is anything dangerous around that might frighten you.

When you get an error in a programming language, that's a good thing: it means that the language detected something you were trying to do that doesn't make sense. Error detection isn't perfect, but it's there for a reason. If you want as little error detection as possible, program in assembly language.

FPGAs are probably one of the worst ways in which you could try to build reliable systems: they are hard to program and they lack error checking. Your best bet for building reliable systems is a very mature, simple microprocessor running a bulletproof, verified language implementation that has extensive built-in error checking and support for error recovery.

Re:FPGA experiences (0)

Anonymous Coward | more than 11 years ago | (#5310446)

In the graphical languages, connections that don't work don't get made - the 'IDE' won't allow the line to connect or component to be placed, so there's no way to make the syntax error in the first place.

My experience was with instrument control, and, sure you could still make logic errors, but you always got operational code out of your efforts. Whether the data back from the instrument made sense after you told it to 'place cart in front of horse' was another issue, but one that all programmers face, regardless the language. There aren't IDEs or compilers that flag bad algorithms.

Re:FPGA experiences (1)

g4dget (579145) | more than 11 years ago | (#5311176)

In the graphical languages, connections that don't work don't get made - the 'IDE' won't allow the line to connect or component to be placed,

And the IDE for a programming language like Java will not let you compile programs with syntax errors or type errors.

There aren't IDEs or compilers that flag bad algorithms.

But the error checking that exists for programming languages is still vastly superior to anything that exists for hardware or circuit programming: making circuits work correctly is still a lot harder than making equivalent software work correctly.

Re:FPGA experiences (1)

zangdesign (462534) | more than 11 years ago | (#5309445)

"Also someone leaked a project going on at National Instruments that really lit up my interest in this."

Labview has been available for quite some time now. It's very specialized software with almost no use in the mainstream that I can think of, but it's out there.

Eeek! (4, Insightful)

Snork Asaurus (595692) | more than 11 years ago | (#5309577)

the systems got into race conditions where you had to use software to figure out the dynamic properties of the chip to see if two signals would arrive at the next gate in time to produce a stable response

Precisely one of the reasons that I shriek in horror when I hear that some hardware was 'designed' by a clever software guy. What you describe "figure out the dynamic ... stable response" (a.k.a. timing analysis) is not done in debugging - it is part of the design from square one, and is part of proper hardware design practices.

The fact that FPGA's are "programmable" does not move their development into the domain of software engineers.

A whole spectrum of skills is required to do proper hardware design (being a good 'logician' is only one of them), and FPGAs are raw hardware, not a finished product like a motherboard. Timing and many other 'real-world' factors that must be considered bore the hell out of many 'logicians', but are critical to a reliable design.

A frightening number of Rube Goldberg machines exist out there that were designed by people who know something of logic and nothing of hardware design. I've had to redesign several of these "works only under ideal conditions but it's brilliant" pieces of junk.

Before you dismiss me as a hardware snob, let me tell you that I have spent many years on both sides of the street and have dedicated my most recent 10 years to learning the art of good software design (there was supposed to *cough* be a bigger future in it). Each requires a set of skills and abilities that do intersect, but many of which exist entirely outside of that intersection. The fact that "logic" is one of the intersecting skills does not make a good hardware designer good at software, nor does it make a good software designer good at hardware.

Re:Eeek! (0)

Anonymous Coward | more than 11 years ago | (#5310580)

I agree with your sentiments.

Statements like:

"either you accepted instability or failed prootypes or you put in extra gates to handle synchronization--which slowed the system down, and caused you to waste precious gates."

are nonsense. If you have signals crossing clock domains (which just about _every_ chip that I've worked on does) then you need to do something to prevent problems caused by metastability.

And other statements that the guy makes indicate that he has no experience with static timing analysis.

This is basic stuff that every EE needs to know. I always ask questions probing out this type of knowledge in job interviews.

Re:Eeek! (0)

Anonymous Coward | more than 11 years ago | (#5310801)

Gee too bad you dont know shit about FPGAs!

relax dude (0)

Anonymous Coward | more than 11 years ago | (#5311172)

Umm, did you read the post you replied to? The guy was relating his experiences as an end user, not as a designer. In fact, he seems to agree with your point: design of an FPGA is an art that needs experts. Indeed, what he was lamenting was the lack of tools for Joe Programmer to use these things effectively.

Re:reconfigurable hype (1)

Hast (24833) | more than 11 years ago | (#5310269)

One interesting project I found a few years back is the RAW project at MIT [mit.edu]. It does pretty much the same thing, but they are no longer using FPGAs. (They use chips which are similar to FPGAs but geared towards computation.) Their first prototypes used FPGAs, though.

Seems like the "programming language" is similar to LabView and such schematic programming languages. (Eg in Matlab you have Simulink.) Apparently there's quite a lot of people who find that easier to work with.

Oh well, it's an interesting field. Let's just hope they don't get a bunch of ludicrous patents that stifle other research in the area.

No magic -- sorry (5, Insightful)

Anonymous Coward | more than 11 years ago | (#5308914)

For a start: chip designers everywhere use FPGAs to prototype their designs. No magic; they are reasonably fast (but not as fast as custom-designed chips), and way more expensive. Having a large array of them would indeed make it possible to run DES at a frightening speed -- but so would a mass of standard computers. The sticking point is that a collection of FPGAs emulating a standard CPU would be way slower for any given CPU budget than a custom chip (like the PII, PIII or AMD K7) -- and way more expensive.

Think about it: both Intel and AMD (and everybody else) use FPGAs for prototyping their chips. If it was so much more efficient, why do they not release chips with this technology already?

As for the reprogramming component of this design: translating from low-level code to actual chip surface (which it still is very much about) is largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.

Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process :)

Re:No magic -- sorry (1)

wirelessbuzzers (552513) | more than 11 years ago | (#5308946)

No magic; they are reasonably fast (but not as fast as custom designed chips), and way more expensive.

...although it would be really cool to run Magic (ASIC chip design software) on these things. Probably bitching fast, too, as you have a prototype board just sitting there. And you could use it to design those "custom chips" that are so much more efficient :-)

BTW, you're right, a context switch would be a bitch; it'd probably take 10 milliseconds.

Re:No magic -- sorry (1)

PaddyM (45763) | more than 11 years ago | (#5308994)

I still think it would be cool to have a device that could play mp3s for a while, and then when a call came in, simply load in the wireless phone processor and go from there. I don't know if it would ever actually be cheaper than simply having both devices though.

Re:No magic -- sorry (5, Insightful)

seanadams.com (463190) | more than 11 years ago | (#5309050)

For a start: chip designers everywhere use FPGAs to prototype their designs.

Xilinx/Altera would not be in business if this were the only thing people used FPGAs for. There are some things you can do in an FPGA exceptionally well, eg pumping lots of data very quickly, and doing repetitive things like encryption, compression, and DSP functions. Generally speaking, the simpler the algorithm and the more it can be parallelized, the better it will work in hardware as compared to a CPU (yes, even a 4GHz pentium might be slower per $).

As for the reprogramming component of this design: translating from low-level code to actual chip surface (which it still is very much about) is largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.

I think it's a language problem more than a limitation of the synthesis/fitting tools. VHDL and Verilog are horrific. They are designed for coding circuits, not algorithms.

Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process

With today's FPGAs this is a real problem. They're designed to be loaded once, when the system starts up. What we need is an FPGA that can store several "pages" of configurations and switch between them rapidly. The config would need to be writeable over a very fast interface, of course.

Re:No magic -- sorry (1)

stiller (451878) | more than 11 years ago | (#5310035)

What we need is an FPGA that can store several "pages" of configurations and switch between them rapidly.

I was just thinking the exact same thing. When the reconfiguring process speeds up to the point where it loses only a few cycles instead of thousands, it could speed up certain processes considerably. Suppose the FPGA would start out in a 'basic' general purpose config, while a preprocessor would scan ahead and create several circuit schemes based on the code it finds. Something leaning towards compiler based optimisation, but in real-time. This would be a tricky task, but the boost could be significant.

Re:No magic -- sorry (1)

nexthec (31732) | more than 11 years ago | (#5310560)

CMU has a project like this called PipeRench, I believe.

Re:No magic -- sorry (1)

Hast (24833) | more than 11 years ago | (#5310847)

There is research going on in this field. Eg it would allow you to reconfigure part of a pipe-line while data is flowing through the chip.

This is in fact already possible, but the reconfiguration time for large parts of a chip is generally way too slow for it to be usable. But if you have a design which allows you to reconfigure only a very small part of the chip, then it's doable during runtime. (Although you may need special boards to do it; I'm not sure how many developer boards actually support reconfiguration while running.)

The idea of having small premade parts is already in use by eg the RAW project at MIT. Doing runtime optimizations is probably never going to happen though because doing routing on a large FPGA can take days to complete.

Emulating CPUs with FPGAs??? A better way... (1)

olafo (155551) | more than 11 years ago | (#5309185)

Although FPGAs may be used to emulate CPUs etc., that does not maximize their potential speed and flexibility. Traditional CPUs are severely restricted to only one (or several) operations/cycle. Thus, most silicon (gates) on general-purpose CPUs is wasted during each cycle, with less than 1% active/cycle. FPGAs are inherently parallel, allowing orders of magnitude more operations/cycle. You can pack applications to maximize the operations/cycle, and if you exceed the 6 million gates/FPGA chip, even extend to additional FPGAs and FPGA boards. This allows tailoring FPGAs to applications in a reconfigurable way to optimize silicon use. Viva simplifies coding of large-scale applications in a 3-dimensional way (x and y screen axes plus drilling in for the 3rd dimension), which is more intuitive than traditional 1-dimensional sequential line-by-line ASCII coding. The next generation [cox.net] seems to adapt well to graphic (iconic) coding, perhaps better than many of us who have our tradition in 1-D ASCII coding.

Re:No magic -- sorry (1)

Precipitous (586992) | more than 11 years ago | (#5310019)

The introduction to this article addresses most of your points: "Iterative Matrix Equation Solver for a Reconfigurable FPGA-Based Hypercomputer" [starbridgesystems.com] . I'm certainly no expert in chip design, but what they are saying makes some sense:

Your point about speed:

"... the collection of FPGA:s emulating a standard CPU would be way slower ..."

Their point is that you aren't emulating a standard CPU. Their approach is for applications that involve "Solving systems of simultaneous linear equations...". The traditional approach is many generic CPUs in parallel. From the article:

"However, this type of parallelism is inefficient, using only a small fraction of CPU resources at any given time, while the rest of the silicon lies idle and wastes power. CPUs are designed to be general and capable of performing any function they will ever need to perform. Therefore, they contain many resources that are rarely used. In addition, the inter-processor communication time required by traditional matrix equation solvers seriously limits the number of processors that may operate efficiently in parallel. [...] optimize chips is normally a long and tedious process not available or feasible to most programmers."

You argue cost:

... for any given CPU budget ... and way more expensive.

The article argues that a single FPGA can probably replace a whole lot of CPUs (because it can process as much in parallel as you can cram on the chip). One could also point out that if this type of technology becomes more prevalent, higher production volumes would lower FPGA costs. I guess we'd have to see some ROI analysis: how many CPUs can they replace with an FPGA? Could you get one workstation-class device to replace a cluster or mainframe? Most of their articles discuss a technology in the proof-of-concept stage, so it will be a while before we can talk about which situations it pays off to use this in.

Your third point, it's hard to code FPGAs:

"...translating from low-level code to actual chip surface ... is largely a manual even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job."

A major thrust of StarBridge Systems seems to be creating easy-to-use and effective tools to do exactly this. Read the sections about their Viva technology. Even if it doesn't do it perfectly, it may do it well enough.

Ummmm.... (1)

shadwwulf (145057) | more than 11 years ago | (#5308918)

...Does anybody else find it remotely unnerving that NASA is working with a computer system named "HAL"?!

Arthur C. Clarke might have been on to something... First the geosynchronous satellite, now this!?

Re:Ummmm.... (1)

olafo (155551) | more than 11 years ago | (#5309440)

Better than that, a key NASA researcher attempting to convert Cray weather codes is Dave. However, we're targeting the new Starbridge HC system with Xilinx FPGA chips with 6 million gates rather than the 62K gates of HAL.

Re:Ummmm.... (0)

Anonymous Coward | more than 11 years ago | (#5309477)

No.

I still don't entirely believe it... (3, Insightful)

wirelessbuzzers (552513) | more than 11 years ago | (#5308924)

I looked at this site several years ago, and thought, "whoa, cool idea, FPGAs would make a really fast computer." Then for two years, nothing to show for this idea. And after I programmed some FPGAs, I realized (at least partly) why: they're too slow to program. It takes on the order of milliseconds to reprogram even a moderate-sized FPGA.

And even a very large FPGA would be pretty lousy at doing SIMD, vector ops, etc. Basically, they would suck at emulating a computer's instruction set, which is (fairly well) optimized for what software actually needs to do. I can't think of many algorithms used by software today that would work much better in an FPGA, except for symmetric crypto. And if you need to do that, get an ASIC crypto chip, 10s of dollars for umpity gigs/second throughput. SPICE might also run a bit faster on these (understatement), but those types already have decent FPGA interfaces.

Furthermore, the processor programming these FPGAs must have some serious power... if you have to do many things on an FPGA at once (which you do if there are only 11 of them), you basically have to place & route on the fly, which is pretty slow.

So, I don't think that these "hypercomputers" will ever be any better than a traditional supercomputer in terms of price/performance, except for specialized applications. And even then, it won't be any better than an application-specific setup. And not many people need to go back and forth between specialized tasks. (Who am I to complain about price/performance? I'm a Mac user.)

That said, if they *can* put a hypercomputer on everyone's desk for $1,000.00, more power to them!

Re:I still don't entirely believe it... (1)

g4dget (579145) | more than 11 years ago | (#5309013)

I looked at this site several years ago, and thought, "whoa, cool idea, FPGAs would make a really fast computer." Then for two years, nothing to show for this idea. And after I programmed some FPGAs, I realized (at least partly) why: they're too slow to program. It takes on the order of milliseconds to reprogram even a moderate-sized FPGA.

"This site" is hardly the forerunner in reconfigurable computing. Look for "reconfigurable computing" on Google, and you will find that academic research labs have been looking at it for as long as there have been FPGAs.

There are probably better tradeoffs than FPGAs for reconfigurable computing: rather than reconfiguring gates, it may make sense to reconfigure arithmetic circuits. There has been some work in that area. The point is that FPGAs are nice because they are commodity hardware, but they are probably a pretty suboptimal choice for reconfigurable computing.

Re:I still don't entirely believe it... (0)

Anonymous Coward | more than 11 years ago | (#5309026)

If this is running as a live CPU, why would you want to REPROGRAM it? Just reprogram it like flashing a BIOS with the latest CPU. That's why they don't want you using FPGAs: download the latest AMD Athlon core and PROGRAM that on? I don't think so.

Re:I still don't entirely believe it... (0)

Anonymous Coward | more than 11 years ago | (#5309104)

It does not surprise me that NASA is using a Star Bridge machine to implement some very specific algorithm or class of algorithms. FPGAs can be many orders of magnitude faster than a CPU/DSP if someone figures out how to unroll the algorithm and map it onto the fabric of an FPGA. For instance, check out:

http://www.andraka.com/dsp.htm

The hype we have been hearing from Star Bridge for many years is that they claim they have some fancy new language for doing computing on FPGAs (which may be partially true) that will revolutionize everything from cellphones to supercomputers (which is silly).

Reconfigurable Server? ;) (1, Funny)

Anonymous Coward | more than 11 years ago | (#5308930)

How long will it take their server to reconfigure (or melt)? I guess it can't handle the Slashdot effect.

from the smoke-clearing-mirror-disappearing dept? (0)

Anonymous Coward | more than 11 years ago | (#5308934)

Yeah right...

Starwhatnow something about being great and living up to hype... lots of links... erm... uuh....

Should be from the "how-to-post-a-story-with-no-real-story-in-the-story-dept."

etiquette, efficiency (1)

sstory (538486) | more than 11 years ago | (#5308958)

How about in the slashdot summary, we give some idea what we're talking about from now on.

New /. slogan (1)

dark-br (473115) | more than 11 years ago | (#5308967)

News for the curious. Stories that you can't really find.

The Mozilla Phoenix Browser (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5308969)

I downloaded the Phoenix browser the other day. It's OK - but a few questions for you nerds:

-Why does it need 18M - 30M just to run?
-Why didn't they use the standard MS Windows components? Their cheap replacements are slow to respond and draw.

Otherwise it is pretty nice, aside from not being able to display all the pages (it does show the MS Developer Network pages though -- albeit, slowly).

Thanks in advance for your responses to my queries.

Crypto cracking applications? (3, Informative)

Anonymous Coward | more than 11 years ago | (#5308972)

According to this presentation [nasa.gov], the NSA is involved with two projects.

Going from 4 GFLOPS in Feb '01 to 470 GFLOPS in Aug '02 for ten FPGAs, that's nearly 120 times faster in a little over a year. Not bad.
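
Checking that ratio:

    feb01_gflops, aug02_gflops = 4, 470  # from the linked presentation
    print(aug02_gflops / feb01_gflops)   # 117.5, so "nearly 120x" holds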

Any thoughts on what this means for crypto cracking capability?

Re:Crypto cracking applications? (1)

josh crawley (537561) | more than 11 years ago | (#5310329)

Yes, and the theoretical rate of how fast the GeForce 4200 can go is 1 TFLOP. Anyway, FPGAs have a li'l problem: they're too slow. That's balanced by the fact they can be reprogrammed (even by themselves). ASICs are almost always faster.

This is the future of High Performance Computing (5, Interesting)

Dolphinzilla (199489) | more than 11 years ago | (#5308979)

We started using FPGAs in our HPC designs where I work several years ago -- the designs are faster, more reliable, and quicker to design. StarBridge's graphical development environment is a lot like another product sold by Annapolis Micro called Corefire [annapmicro.com].
Corefire is a Java-based graphical (iconic) development environment for Xilinx FPGAs. Like anything else, though, sometimes programming in VHDL will be a better choice; it depends on the complexity of the design and the desired end result. But all in all, we probably saved at least 6 man-months of design time using Corefire.

Congratulations (0)

Anonymous Coward | more than 11 years ago | (#5308990)

you just burnt up NASA Langley Research Center.

More information (5, Informative)

olafo (155551) | more than 11 years ago | (#5308991)

More technical information is found in MAPLD Paper D1 [klabs.org] and other reports [nasa.gov]. NASA Huntsville, NSA, USAF (Eglin), University of South Carolina, George Washington University, George Mason University, San Diego Supercomputer Center, North Carolina A&T and others have StarBridge Hypercomputers they are exploring for diverse applications. The latest StarBridge HC contains Xilinx FPGAs with 6 million gates, compared to the earlier HAL-Jr with only 82,000 gates. Costs are nowhere near $26 million. NASA spent approximately $50K for two StarBridge systems.

If starbridge was ready with their snazzy machine, (1)

WebCowboy (196209) | more than 11 years ago | (#5309042)

... you'd think they would use it to host their web page [starbridgesystems.com] har har!

Of course, the slashdotting it is starting to succumb to might be because they spent so much on developing the machine that they could only afford hosting off a single little DSL connection. After all, they certainly haven't spent much on PR either, as they do not garner many search hits on the net or widespread press...

adaptive computing has great promise (3, Informative)

KingPrad (518495) | more than 11 years ago | (#5309055)

There is a lot of work being done with adaptive computing involving a combination of a general CPU and an FPGA. The CPU takes care of normal work, but processing-intensive repetitive tasks can be pushed onto the FPGA. The FPGA is basically reconfigured as needed into a dedicated signal processor which can churn through a set of complex instructions in a single step rather than a few dozen clock cycles on a general purpose CPU.

The way it works then is that a board is made with a normal CPU and an FPGA next to it. At program compile time a special compiler determines which algorithms would bog down the processor and develops a single-cycle hardware solution for the FPGA. That information then becomes part of the program binary so at load time the FPGA is so configured and when necessary it processes all that information leaving the CPU free. The FPGA can of course be reconfigured several times during the program, the point being to adapt as necessary. The time to reconfigure the FPGA is unimportant when running a long program doing scientific calculations and such.
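
A toy model of that compile/load/dispatch flow (every name, the threshold, and the placeholder bitstream string below are invented for illustration; no real toolchain works exactly this way):

    HOT_THRESHOLD = 1_000_000  # estimated ops: costlier kernels count as 'hot'

    def compile_program(kernels):
        """'Compile time': decide which kernels get a hardware configuration."""
        binary = {"software": {}, "fpga_configs": {}}
        for name, (func, est_ops) in kernels.items():
            binary["software"][name] = func
            if est_ops >= HOT_THRESHOLD:
                binary["fpga_configs"][name] = "bitstream::" + name  # placeholder
        return binary

    def run(binary, name, data):
        """'Load/run time': route hot kernels to the (pretend) FPGA."""
        if name in binary["fpga_configs"]:
            print("configuring FPGA with", binary["fpga_configs"][name])
        return binary["software"][name](data)  # same answer either way

    prog = compile_program({
        "checksum": (sum, 500),                                 # cheap: CPU
        "convolve": (lambda d: [2 * x for x in d], 5_000_000),  # hot: FPGA
    })
    print(run(prog, "checksum", [1, 2, 3]))  # 6
    print(run(prog, "convolve", [1, 2, 3]))  # [2, 4, 6]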

It's a pretty nifty system. Some researchers have working compilers, and they have found 6-50x speedups with many operations. The program won't speed up that much of course, but it leaves the main CPU more free when running repetitive scientific and graphics programs.

You can find information in the IEEE archives or search google for 'adaptive computing'. It's a neat area with a lot of promise.

4 IBM PowerPCs onboard a Xilinx FPGA (1)

olafo (155551) | more than 11 years ago | (#5309513)

How about 4 IBM PowerPCs onboard a large Xilinx FPGA [xilinx.com] chip? That allows significant flexibility. The tradeoff is just how parallel your application is. Some prefer more silicon for reconfigurable gates to the fixed on-board CPUs, which may not maximize silicon use per cycle during an application.

Reconfigurable vs Vector (1, Informative)

zymano (581466) | more than 11 years ago | (#5309262)

From what I have read about reconfigurable chips at Scientific American and other websites, while they can do wonders for certain applications, they still can't match the wiring of a 'vector processor'. Vector chips are very efficient. I have always wondered why the industry has turned its back on them. The Linux/Intel solution is not as efficient as everyone thinks. Too much heat, and networking the chips has its difficulties. The Japanese NEC vector supercomputer is way ahead of the USA now. If you don't believe me, then go here and learn what top US scientists say; good article. Go down 3 articles. NewsFactor Portal [newsfactor.com]

Re:Reconfigurable vs Vector (1)

Hast (24833) | more than 11 years ago | (#5310866)

Vector processors are very much in use today. All current processors support it in some way through SSE and similar instructions, and the G4 AltiVec has a lot more to offer in the same area. Furthermore, if you have a reasonably current graphics card, then it uses vector processing as well.

Nobody said vector processors are dead. They just tend to be overkill for most applications. (And hence they are instead used as a type of co-processor.)

yippy skip... (0)

Anonymous Coward | more than 11 years ago | (#5309575)

Didn't NASA discover that there was once life on Mars? This pales by comparison. They ought to think of better ploys for getting more funding.

If NASA sez it (0)

Anonymous Coward | more than 11 years ago | (#5309679)

Yup, that's the way to dispel rumours of hoax and/or conspiracy, all right. Invoke the mention of NASA.

chinese fpga processor (3, Interesting)

OoSync (444928) | more than 11 years ago | (#5309847)

I cannot remember the name of the project, but two years ago a Chinese group published a paper where they used a Xilinx FPGA on a custom circuit plugged into a PC SDRAM slot. The idea was to limit the communication bottlenecks of other PC busses and also to present a simple way to communicate with the FPGA. All input/output to/from the FPGA was done with simple mmap() routines. Their test application was a DES code breaker that could run as fast as the memory subsystem could take it. Exciting stuff. And it has to be said: I wish I had a Beowulf cluster with these.
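
The appeal of that design is that talking to the FPGA looks like ordinary memory access. A hypothetical sketch in Python (the device path, offsets, and flag layout are all invented for illustration; the paper's actual interface is not documented here):

    import mmap
    import struct

    REGION = 4096  # size of the mapped window (invented for this sketch)

    with open("/dev/fpga0", "r+b") as f:                  # hypothetical device node
        mem = mmap.mmap(f.fileno(), REGION)
        mem[0:8] = struct.pack("<Q", 0x0123456789ABCDEF)  # write a 64-bit DES block
        mem[8:9] = b"\x01"                                # invented 'go' flag
        while mem[9:10] == b"\x00":                       # poll invented 'done' flag
            pass
        result, = struct.unpack("<Q", mem[16:24])         # read back the answer
        print(hex(result))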

Is it just me or...... (1)

abhikhurana (325468) | more than 11 years ago | (#5310211)

Was the paper too simple? I mean, I was able to understand every single thing in it, as if it was written for a layman. Since when did NASA guys start writing papers like that? But anyway, the other day I was discussing the use of FPGAs in an embedded system, and we reached the conclusion that an FPGA consumes too much power to implement a whole CPU out of it. That said, an FPGA is surely an interesting concept for specialized computation, as the article mentions. But this is very much limited by how frequently you can reprogram the FPGA, and if I am not wrong, this is not a very fast process, not comparable to the number of instructions executed by a modern-day CPU. So effectively, massively parallel applications are fine, but if you have to reprogram your FPGAs too often then you won't get the proposed performance boost.

FPGA problems (4, Interesting)

wumpus2112 (649012) | more than 11 years ago | (#5310216)

FPGAs are great for complex control logic in hardware. They can also be used for DSP functions. Using FPGAs for general-purpose computing efficiently is difficult. You essentially start out with a 100:1 handicap against a commodity CPU (this comes from the number of transistors per gate, using VHDL vs. custom design, and the way routing wires have to be universal). Programming such a beast becomes an exercise in finding huge amounts of parallelism that doesn't require memory accesses (FPGAs have limited RAM on board, but don't expect much, and you have to share it across hundreds of functions).

Supposedly there are C-to-hardware compilers out there, but I can't see Joe Software Designer chugging out code that carefully checks how every line affects the clock rate. (Remember: in software you have 10% of the code executed 90% of the time; in hardware you have 100% of the code executed 100% of the time. The FPGA can only clock as fast as the slowest path.)

The economics are probably the worst problem. These sorts of things are most likely to go into government or military installations where the contract says the hardware has to do (impossible thing) and be maintained for x years. The device gets made, the customer changes the requirements, spin, repeat, until it ships as an expensive system that a simple desktop could handle by ship date. If you build a Beowulf to do something, with a little foresight you can turn around in two years and double the power. With an FPGA design, you may be able to buy an off-the-shelf board that has advanced (if the company is still in business, not guaranteed), but then you have to dig into the source and modify it for the new chips. This gets expensive fast.
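
The critical-path point is worth a number. A back-of-the-envelope illustration in Python (the delays are made up):

    # The clock can tick no faster than the slowest combinational path settles.
    path_delays_ns = {"adder": 4.2, "mux tree": 6.8, "comparator": 3.1}

    critical = max(path_delays_ns.values())  # the slowest path wins
    fmax_mhz = 1000.0 / critical             # period in ns -> frequency in MHz
    print(f"critical path {critical} ns -> fmax about {fmax_mhz:.0f} MHz")  # ~147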

VLIW / Reconf. computing (1)

nimrod_me (650667) | more than 11 years ago | (#5310267)

While FPGAs are definitely very useful for specific problems, they are way too difficult to use for replacing a general-purpose CPU. At the company I work for, we use FPGAs for prototyping our ASIC. Each such Xilinx FPGA costs about $1000! Sure, it beats the hell out of any DSP, but it takes an entire development team and a substantial amount of time to design the FPGA equivalent of whatever algorithm you're trying to implement! It would be nice if we had a C->FPGA compiler, but... just take a look at how much difficulty Intel is having with their VLIW Itanic! Good parallelizing compilers are still a good research project rather than a proven product. Nonetheless, I would love to get my hands on that machine...

Hype or hoax (1)

Junks Jerzey (54586) | more than 11 years ago | (#5310313)

These days anything that isn't related to Linux or Windows or a new video card is considered hype or a hoax. It's sad how closed-minded and unexciting computers have become. We have huge debates about whether X11, which dates from 1984, should be retired, and the end result is always "well, we've gotten this far with it, so let's keep going." And so it goes.

Visual Dataflow Language (0)

Anonymous Coward | more than 11 years ago | (#5310453)

Here's a link apparently showing how you write the factorial fn in the visual dataflow language: http://hummer.larc.nasa.gov/acmbexternal/Personnel/Storaasli/images/factorial.bmp

Deja vu (1)

ShadowDrake (588020) | more than 11 years ago | (#5310484)

Hmm... computing platform heavily dependent on runtime-configurable FPGAs. Doesn't that sound like the Commodore-One (MSRP 250 Euros)?