
New Way to Patch Defective Hardware

Zonk posted more than 7 years ago | from the small-size-different-angle dept.


brunascle writes "Researchers have devised a new way to patch hardware. By treating a computer chip more like software than hardware, Josep Torrellas, a computer science professor at the University of Illinois at Urbana-Champaign, believes we will be able to fix defective hardware by applying a patch, similar to the way defective software is handled. His system, dubbed Phoenix, consists of a standard semiconductor device called a field programmable gate array (FPGA). Although generally slower than their application-specific integrated circuit counterparts, FPGAs have the advantage of being modifiable post-production. Defects found on a Phoenix-enabled chip could be resolved by downloading a patch and applying it to the hardware. Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market.""


what? (5, Informative)

Anonymous Coward | more than 7 years ago | (#18683559)

I'm not sure I see what this guy is doing that is novel. I can't tell if it's a stupid writeup or if this guy really thinks sending out a new bitstream to an FPGA is a breakthrough. FPGAs are remarkable pieces of hardware, and depending on how much you're willing to spend they can run up to a few hundred megahertz - though timing problems can be difficult to resolve at that kind of speed. Many ASIC designers use FPGAs in house to prototype and can afford to spend up to $25,000 for a single chip (only the craziest gate counts cost that much), which reduces the number of million-dollar ASIC production runs. The other reason you don't see a whole lot of FPGAs in closed source hardware is that an end user/hacker could make the hardware go out of spec or do something unintended and then expect warranty support. An increasing number of open source hardware projects (the Universal Software Radio Peripheral, or USRP, for one) include FPGAs, however. Anyway, bottom line is I just don't see from the article what this guy is doing that is so special. The article makes it sound like the chip can detect the errors itself but then requires a patch to be uploaded. It sounds to me like he's adding logic that works around certain hardware states in the fixed portions of the circuit - but that's just updating the VHDL/Verilog and creating a new bitstream. So again, I don't know if it's a dumb article or a dumb researcher. Anyone have more information?

Re:what? (3, Informative)

Akaihiryuu (786040) | more than 7 years ago | (#18683699)

I can't tell if the stupidity is in the article writer (most likely) or the researcher. But yeah, FPGAs are NOT new technology; they've been around for a long time. They are definitely useful in development, but an FPGA used to prototype a part is orders of magnitude slower than the ASIC they produce from the design. I can see them being useful in very limited applications, but if the article writer or researcher thinks that we'll be replacing our CPUs or GPUs with FPGAs anytime soon, they're pretty dumb.

"regular" programmable logic (3, Insightful)

nurb432 (527695) | more than 7 years ago | (#18684089)

Predates FPGAs by decades. Sure, they have advanced things greatly, but where the hell has this guy been the last 30 or so years? Under a rock?

Personally, I was using PROMs as rudimentary programmable logic 20 years ago.

d00dz: shut up (3, Funny)

smitty_one_each (243267) | more than 7 years ago | (#18684203)

We've got the USPTO convinced that "Prior Art" is just paintings by a moderately famous black comedian with a penchant for potty-mouth.
Don't screw this up, m'kay?

Re:"regular" programmable logic (0)

Grishnakh (216268) | more than 7 years ago | (#18684235)

Yes, you're talking about PLAs, PALs, and CPLDs. However, there's something fundamentally different about FPGAs that separates them from these other devices: FPGAs are programmed by an external ROM on power-up, so it's extremely easy to "reprogram" them by simply changing the ROM code just like you would with any firmware update (re-flashing, etc.). Those other devices must be physically removed from the circuit and reprogrammed in a special programming device.

In a lab environment, this isn't that important. But when you're dealing with devices deployed in the field to end-users, this makes the difference between doing physical returns/exchanges and just sending out a software update.
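
To make that concrete, here is a minimal C sketch of what the field-update path might look like for an SRAM-based FPGA whose bitstream lives in external configuration flash. The flash and reconfigure hooks (cfg_flash_erase, cfg_flash_write, fpga_reconfigure) are hypothetical placeholders for whatever board support code a real product would have; this is an illustration of the idea, not any vendor's API.

    /* Hypothetical sketch: replace the FPGA bitstream stored in configuration
     * flash, then ask the FPGA to reload it. The three stub functions stand in
     * for a real board support package. */
    #include <stdio.h>

    #define CFG_FLASH_SIZE (4 * 1024 * 1024)    /* assume a 4 MB config flash */

    /* --- hypothetical board-support hooks; a real product supplies these --- */
    static int cfg_flash_erase(void)                        { return 0; }
    static int cfg_flash_write(const void *buf, size_t len) { (void)buf; (void)len; return 0; }
    static int fpga_reconfigure(void)                       { return 0; }

    /* Write a new bitstream file into the configuration flash. */
    static int apply_fpga_patch(const char *bitstream_path)
    {
        FILE *f = fopen(bitstream_path, "rb");
        if (!f) { perror("open bitstream"); return -1; }

        static unsigned char image[CFG_FLASH_SIZE];
        size_t len = fread(image, 1, sizeof image, f);
        fclose(f);
        if (len == 0) { fprintf(stderr, "empty bitstream\n"); return -1; }

        /* Just like re-flashing firmware: erase, write, reload. */
        if (cfg_flash_erase() != 0)           return -1;
        if (cfg_flash_write(image, len) != 0) return -1;
        return fpga_reconfigure();            /* FPGA re-reads the flash on reset */
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s new.bit\n", argv[0]); return 1; }
        return apply_fpga_patch(argv[1]) ? 1 : 0;
    }

In practice the image would also carry a checksum or signature and the old bitstream would be kept as a fallback, but to the end user it looks exactly like any other firmware update.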

Re:what? (-1)

MyDixieWrecked (548719) | more than 7 years ago | (#18684173)

yeah, I first heard of FPGAs when DarkFader was first hacking the Nintendo DS to run homebrew code, and my friend would use them as a pass-thru device when hacking ROMs on various other electronic devices.

Re:what? (3, Informative)

Anonymous Coward | more than 7 years ago | (#18684403)

"I can't tell if the stupidity is in the article writer (most likely) or the researcher."

Wrong. I believe that the stupidity is in the Slashdot readers. Dr. Torrellas published this in Micro-39, which means that the paper has been floating around the internet for around 4-6 months. You should assume that article writers are going to screw up the details. Go read the paper yourself. Here's a link:

http://iacoma.cs.uiuc.edu/iacoma-papers/micro06_phoenix.pdf [uiuc.edu]

Then, if you feel so inclined, go read other modern papers to see exactly what the other new things on the horizon are. Go to Google, search on some computer architecture conferences (ISCA, MICRO, HPCA, ASPLOS, etc), then read the papers listed in the conference proceedings. You can find them using most search engines (Try Google Scholar).

Then, after reading the source, come to Slashdot and bitch intelligently. After all, if you don't -- aren't you just gossiping about your friend's brother's sister sleeping with that other guy with the gross body odor? Yeah right, like she'd hit that.

Re:what? (2, Insightful)

CuriHP (741480) | more than 7 years ago | (#18683719)

That was basically my read as well. It sounds like there may be something interesting in the automatic error detection, but the writeup is much too vague to be useful.

I really don't see this going anywhere in the near future simply because of cost. You've just taken a $10 ASIC and replaced it with a $600 FPGA. ASICs may cost more than FPGAs in upfront design costs, but if you're going to use more than a thousand and can wait the extra few months, it's always going to be cheaper. Big FPGAs are expensive.

Re:what? (1)

WheelDweller (108946) | more than 7 years ago | (#18683775)

Yeah, I have to agree; I was programming 1702s to act as address decoders WAY back in 1978 - probably the lowest rung of programmable hardware. These devices are great, but hardly a breakthrough.

Re:what? (4, Informative)

AaronW (33736) | more than 7 years ago | (#18683887)

You would be surprised how widespread FPGAs are. They are used in consumer devices to a limited extent. In high-end hardware, where cost is not as much of an issue and the volume is lower, FPGAs are very common. I know networking hardware has been using FPGAs for at least a decade, and most enterprise networking equipment I see has them. They are common in higher-end routers and other devices.

FPGAs also have come down significantly in cost while increasing their gate counts. A number of FPGA vendors also offer services where you can go straight from an FPGA to an ASIC at a much lower cost than a full custom ASIC design. Start looking inside consumer devices... look for chips that say Xilinx, Altera, Lattice, Actel and more. Some of these companies also make regular ASICs, but many of the parts you see are FPGAs.

FPGAs are nothing new, though it is not so common for consumer devices to be upgraded in the field as it is for higher-end devices.

Re:what? (1)

d3matt (864260) | more than 7 years ago | (#18684001)

Yeah, we use a lot of FPGAs in our in-house product. They're nice because you can completely change functionality on the fly, or fix bugs on the rare occasion the hardware is at fault.

Re:what? (2, Insightful)

SnowZero (92219) | more than 7 years ago | (#18684401)

It's all about volume. If you're only making 1-1000 of something, then an FPGA is way cheaper than an ASIC. High-end devices often have low volumes (per revision), but even a low-end device makes sense with an FPGA if you aren't selling that many of them. For the in-house robotics projects being done in my lab, they are indispensable, since they can be used to replace small logic chips and most of the glue logic; it's hard to beat an ARM chip with an FPGA next to it :)
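
For anyone who hasn't wired one of these up: the usual arrangement is that the FPGA's registers appear in the ARM's address space, so the glue logic is driven with plain memory-mapped I/O. A minimal C sketch follows; the base address and register offsets are made up for illustration, and a real board would define them in its memory map.

    /* Memory-mapped I/O sketch for talking to glue logic in an FPGA from an
     * ARM. FPGA_BASE and the register offsets are hypothetical. Assumes a
     * bare-metal setup (or an equivalent /dev/mem mapping) where the address
     * is directly accessible. */
    #include <stdint.h>

    #define FPGA_BASE      0x40000000u   /* hypothetical chip-select window */
    #define REG_ID         0x00u         /* read-only: design ID/version    */
    #define REG_MOTOR_PWM  0x04u         /* write: PWM duty for one motor   */
    #define REG_ENCODER    0x08u         /* read: quadrature encoder count  */

    static inline uint32_t fpga_read(uint32_t off)
    {
        return *(volatile uint32_t *)(uintptr_t)(FPGA_BASE + off);
    }

    static inline void fpga_write(uint32_t off, uint32_t val)
    {
        *(volatile uint32_t *)(uintptr_t)(FPGA_BASE + off) = val;
    }

    void robot_example(void)
    {
        uint32_t id = fpga_read(REG_ID);   /* sanity-check the loaded design */
        (void)id;
        fpga_write(REG_MOTOR_PWM, 50);     /* e.g. 50% duty cycle */
        uint32_t ticks = fpga_read(REG_ENCODER);
        (void)ticks;
    }

The FPGA side is just a bus slave decoding those offsets, which is exactly the kind of small-logic replacement being described here.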

So, he's discovered the FPGA? (4, Insightful)

seanadams.com (463190) | more than 7 years ago | (#18683561)

Don't bother reading TFA, there is no more information there than what's in the summary. Just some additional hand waving about how this enabling technology will magically detect and fix hardware bugs.

I'm sure the professor has developed _something_, but the article sure doesn't give any clue what it might be. This story is nothing more than an exceptionally poor description of what any FPGA can do.

Re:So, he's discovered the FPGA? (0, Redundant)

Anonymous Coward | more than 7 years ago | (#18683619)

KILL THIS article. FPGAs aren't new; they were invented in 1984. See http://www.xilinx.com/company/history.htm [xilinx.com]

Re:So, he's discovered the FPGA? (3, Informative)

treeves (963993) | more than 7 years ago | (#18684243)

But they're not just talking about FPGAs. It's not the article you want to kill; it might be the summary, which makes it sound like the whole idea is just an FPGA.

If you RTFA, you see that they're talking about adding a field-programmable unit into a traditional CPU, like an IBM 750FX or AMD Athlon 64, to allow hardware bug repair - something they couldn't have done in the case of the Pentium FDIV bug, for example.

Re:So, he's discovered the FPGA? (5, Insightful)

alx5000 (896642) | more than 7 years ago | (#18683755)

Imagine for a moment that this guy has invented something new. Imagine, as the last line of the summary suggests, that "If they know that they could fix the problems later on, they could beat the competition to market."

Sounds like the hardware version of Windows. Every user would be a beta tester. Your phone calls your friends in the middle of the night and makes strange noises? It's ok, we'll fix it soon. Meanwhile remember we were the first to offer scheduled calls for cell phones!

Re:So, he's discovered the FPGA? (1)

Daengbo (523424) | more than 7 years ago | (#18684185)

Exactly my thought. We've been living with beta software released as final for some time now (and with much pain). Now this guy wants us to deal with beta-level hardware, too? Sounds like it's good for management and bad for both customers and engineers.

I appreciate firmware updates that add features or fix non-obvious bugs, but increasing the number of problems we already have with software, firmware, and drivers by adding hardware to the list of incomplete implementations seems like a step in the wrong direction.

It's a new way to use FPGA technology (4, Interesting)

mbessey (304651) | more than 7 years ago | (#18683819)

The article IS light on details, but the last paragraph does explain how the system would work. Basically, manufacturers of mass-market chips would provide a small amount of FPGA-like programmable logic in every chip they make. This programmable logic would sit idle until some defect was discovered in the chip.

At that point, you can send a "patch" to the chip that uses the programmable logic to detect the error condition (or conditions that trigger the error), and work around the problem.

It's fairly clever, and is similar in spirit to the microcode patches that various x86 CPU manufacturers use to correct errors in their chips after they're taped out. It would be interesting to read about what the actual design is. It seems like coming up with a generic logic-patching mechanism that can deal with previously unknown errors would be a pretty interesting task.
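
To illustrate the shape of that idea (and only the shape - the signatures and the workaround hook below are invented, not anything from the Phoenix paper), here is a behavioral C model of spare logic that sits idle until a downloaded patch tells it which condition to watch for:

    /* Behavioral sketch of "spare programmable logic + downloadable patch".
     * In real hardware the match would be a small programmed logic function
     * watching internal control signals; here it is a predicate over a
     * made-up chip state. All names are illustrative. */
    #include <stdbool.h>
    #include <stdio.h>

    struct chip_state {              /* whatever signals the patch logic can see */
        unsigned opcode;
        unsigned mode_bits;
        bool     cache_miss_pending;
    };

    typedef bool (*defect_signature)(const struct chip_state *);
    typedef void (*workaround)(struct chip_state *);

    struct hw_patch {
        defect_signature match;      /* "are we in the buggy corner case?" */
        workaround       fix;        /* e.g. stall, replay, or reroute     */
    };

    /* Example erratum: opcode 0x2A misbehaves while a cache miss is pending. */
    static bool sig_erratum42(const struct chip_state *s)
    { return s->opcode == 0x2A && s->cache_miss_pending; }

    static void fix_erratum42(struct chip_state *s)
    { s->mode_bits |= 0x1; /* pretend this forces a slow, safe path */ }

    /* "Downloading a patch" just means filling in this table. */
    static struct hw_patch patch_table[8];
    static int patch_count;

    static void install_patch(defect_signature m, workaround f)
    { patch_table[patch_count].match = m; patch_table[patch_count].fix = f; patch_count++; }

    /* Checked on every operation; the spare logic does nothing until a patch matches. */
    static void check_patches(struct chip_state *s)
    {
        for (int i = 0; i < patch_count; i++)
            if (patch_table[i].match(s))
                patch_table[i].fix(s);
    }

    int main(void)
    {
        install_patch(sig_erratum42, fix_erratum42);
        struct chip_state s = { 0x2A, 0, true };
        check_patches(&s);
        printf("mode_bits after patch check: 0x%x\n", s.mode_bits);
        return 0;
    }

The interesting engineering problem, as noted above, is making the matchable "state" general enough to catch bugs nobody has found yet without burning much area or cycle time.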

Re:It's a new way to use FPGA technology (1)

dhasenan (758719) | more than 7 years ago | (#18683853)

And it would be difficult and expensive enough that the manufacturers would still subject their products to thorough testing. And probably expensive enough that it'll only be used in high-confidence operations, such as NASA hardware.

Re:It's a new way to use FPGA technology (1)

whoever57 (658626) | more than 7 years ago | (#18683989)

And it would be difficult and expensive enough that the manufacturers would still subject their products to thorough testing. And probably expensive enough that it'll only be used in high-confidence operations, such as NASA hardware.
You are confusing design faults with manufacturing faults. The proposed idea addresses only design faults and, if I read it correctly, has some kind of signature recognition logic that recognises the situation in which the chip will produce a faulty result and then must somehow force the chip to bypass it (interrupt the processing). I would expect that this would have to be accompanied by software that is able to work with the result of the "bypass".
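
The software half of that bypass could look like ordinary fault handling: the patched logic raises a trap when it spots the bad case, and the OS or a runtime emulates the operation before resuming. Purely as an analogy (this is not the proposed mechanism), here is a user-space C sketch that registers a POSIX signal handler as a stand-in for that trap; the actual emulation and program-counter fix-up are architecture-specific and only described in comments.

    /* Sketch of the software half of a hardware-bypass scheme: the patched
     * logic raises a fault when it sees the bad case, and the kernel or a
     * runtime emulates the operation before resuming. This user-space analogy
     * only registers a handler; decoding the faulting instruction, emulating
     * it, and advancing the program counter are architecture-specific and are
     * left as comments. Everything here is illustrative, not the real design. */
    #include <signal.h>
    #include <stdio.h>

    static void bypass_handler(int sig, siginfo_t *info, void *uctx)
    {
        (void)sig; (void)uctx;
        /* A real handler would:
         *  1. read the faulting instruction at the saved program counter,
         *  2. compute the correct result in software (the "emulation"),
         *  3. write it into the saved register state,
         *  4. advance the saved program counter and return to resume. */
        fprintf(stderr, "bypass trap near %p - would emulate and resume here\n",
                info ? info->si_addr : (void *)0);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_sigaction = bypass_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGILL, &sa, NULL);   /* SIGILL stands in for the hardware trap */
        puts("handler installed (no fault is triggered in this sketch)");
        return 0;
    }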

It's a new way to use PDFs (0)

Anonymous Coward | more than 7 years ago | (#18684103)

The latest news... from 1984... (2, Interesting)

feepness (543479) | more than 7 years ago | (#18683567)

From the wiki link:

The historical roots of FPGAs are in complex programmable logic devices (CPLDs) of the early to mid 1980s. Ross Freeman, Xilinx co-founder, invented the field programmable gate array in 1984.

Umm, ok. Did you mean old way to patch defective hardware?

MOD PARENT UP. (0)

Anonymous Coward | more than 7 years ago | (#18683635)

Yeah, so if I had mod points...

The "Field Programmable" bit of FPGA describes exactly the idea that a Gate Array (hardware logic chip) can be programmed in the field, ie, not in the factory.

Re:The latest news... from 1984... (3, Funny)

vivaoporto (1064484) | more than 7 years ago | (#18683655)

But you don't get it. This is news because it is a new way to Patch Defective Hardware ... in space!!! [slashdot.org]

No Joke .. bogus story (0)

Anonymous Coward | more than 7 years ago | (#18683683)

No joke, any EE knew about this long before this stupid story.
Christ, I knew about FPGAs back in 1992 as an undergrad ... which implies that they were well known to the industry before that. And it was blindingly obvious that this was possible then ... 15 years ago. In fact, we had labs where you did this - it wasn't the point of the lab, but FPGAs were, and "fast fixing", as we called it, got you out of the lab quicker.

News for nerds ... apparently not hardware nerds.

WTF? (4, Insightful)

PhxBlue (562201) | more than 7 years ago | (#18683569)

So we'd get to have these chips in PCs sooner, and in return, they'd be less reliable? No thanks. One Pentium floating-point problem was bad enough.

Re:WTF? (1)

Loconut1389 (455297) | more than 7 years ago | (#18683615)

Even as stupid as the article was, it stated clearly that it enables hardware designers to fix bugs in the field (that's the Field Programmable part of FPGA) - the error would otherwise have been fixed in silicon (ASIC). Most people don't use FPGAs as anything more than a microprocessor - if you want to do more, you usually use something like a Virtex-II Pro, which has an on-board PowerPC in silicon.

Re:WTF? (1)

Loconut1389 (455297) | more than 7 years ago | (#18683639)

err, I meant to say that the article also talks about self-healing processors - but (most people don't use FPGAs... (snip)... in silicon). What he's talking about isn't even self-healing, it's just a new bitstream.

The way it was written, it sounded like two unrelated thoughts.

Re:WTF? (2, Insightful)

Loconut1389 (455297) | more than 7 years ago | (#18683829)

maybe I was thinking too much like someone who does this for a living - responsible designers test with FPGAs as much as they would with ASICs, perhaps even more so, since they can get in more revisions before 'printing' the ASICs. If there's a teeny bug in the 'release candidate' ASIC, are you really going to spend another million to fix it? At least with FPGAs you can fix it before you send it out to customers. Most designers should be using FPGAs that way (even if not for prototyping, but for actual in-circuit use). The article does suggest releasing things preemptively, but I think that was an add-on by the writer, not the researcher. If anything, I'd give credit and say that in an instance where you know there's one bug left that will only affect 1/1,000,000 users, but it'll take a couple more weeks to fix, you could push out the hardware, send out the update when it's ready, and hope no one hits that bug.

No more so than software (1)

EmbeddedJanitor (597831) | more than 7 years ago | (#18683677)

That's a bit like saying that software is crap because we can update it and to get good software we should ban software patching.

Actually, FPGA patches are sometimes done to fix bugs, but more often they're done to change functionality. E.g. a new firmware download uses different DSP algorithms or whatever and thus needs different FPGA algorithms to work properly. Thus both get updated.

Re:No more so than software (0)

Anonymous Coward | more than 7 years ago | (#18683723)

That's a bit like saying that software is crap because we can update it and to get good software we should ban software patching.

No, it's crap because "first to market" trumps "doesn't fall over". The thrust of TFA was to trumpet the first-to-market possibilities.

FPGA does help first to market, and good design (2, Interesting)

EmbeddedJanitor (597831) | more than 7 years ago | (#18683991)

ASICs cost a lot to make and take a very long time. Much of this is due to the long turnaround times to do any verification and the costs associated with revisions.

FPGA revisions are a lot cheaper (almost free), and thus it is much faster to verify designs and release a product. That significantly improves time to market.

Buggy non-FPGA hardware is typically released because of the long design/test loop in hardware resulting in people releasing hardware when it is "good enough". Speeding that up by using FPGAs can actually help good design and improve first-release quality. On top of that, it also gives the flexibility to change things later.

Re:WTF? (2, Insightful)

feed_me_cereal (452042) | more than 7 years ago | (#18683811)

that's not the only problem... imagine what the virus writers will do with this one!

Re:WTF? (1)

ThosLives (686517) | more than 7 years ago | (#18684239)

This is exactly the reason why I will never, ever, ever want any hardware that is more "soft" than my own flesh and blood. That has enough of its own susceptibility to viruses, bacteria, getting smashed, getting clobbered by radiation, etc. for me!

Re:WTF? (1)

Watson Ladd (955755) | more than 7 years ago | (#18684263)

Not much. Virus writers have not been as nasty as they used to be in terms of payloads. No-one flashes BIOS anymore.

Re:WTF? (2, Funny)

noidentity (188756) | more than 7 years ago | (#18683883)

Yeah really, we all know how much more reliable software is compared to currently hard-to-patch hardware. I just can't wait until we have patchable atoms. "Sorry, we've just found that the new-fangled carbon atoms making up all 2032 cars will self-destruct in one week. Please install this new patch, which will take a day to complete transmutation."

Console Titles vs. PC Games (0)

Anonymous Coward | more than 7 years ago | (#18684083)

Remember how console titles generally weren't very buggy? You know, they mostly worked right*, but then came PC titles and crashes became normal? Yeah, they'd have a shorter time to market, but I have this feeling they'd stop bothering with the fixes when it was no longer "cost effective" ...

* Yes, there were still bugs, and even multiple revisions of games--have Realm sketch an invisible enemy sometime. But I'd never have seen most bugs if there weren't tool assisted speedruns.

If the quality doesn't suffer (1)

RedElf (249078) | more than 7 years ago | (#18683581)

If the initial quality doesn't suffer because of this, then it will prove to be a nice update to hardware.

But, quality _will_ suffer... (2, Insightful)

msauve (701917) | more than 7 years ago | (#18683715)

"If they know that they could fix the problems later on, they could beat the competition to market."
So now, consumers will be providing beta testing services for the hardware, in addition to the software.

Re:But, quality _will_ suffer... (2, Funny)

ozphx (1061292) | more than 7 years ago | (#18683879)

>> "If they know that they could fix the problems later on, they could beat the competition to market."

> So now, consumers will be providing beta testing services for the hardware, in addition to the software.

So what the parent is basically saying:

"Look out for the new EA Console(tm), coming soon to a store near you! Runs* all your favourite games!"

Um... (0)

Anonymous Coward | more than 7 years ago | (#18683587)

...and this is new how?

So from a customer viewpoint (5, Insightful)

JanneM (7445) | more than 7 years ago | (#18683591)

So, from a customer viewpoint, what this offers is slower, more expensive hardware that is less tested and buggier than the competitors coming down the pipeline in a month or two?

I suspect I can do without.

Re:So from a customer viewpoint (2, Informative)

horatio (127595) | more than 7 years ago | (#18683749)

I agree. I'm already (as I suspect most of /. is as well) almost constantly dealing with hardware and software that isn't production ready but "beats the competition to market". The Nvidia 680i boards ship with software that conflicts with itself, causing BSODs in XP - as confirmed by my emails with eVGA. It is one thing to ship patches for things discovered after shipping -- but I think most large corps today figure it in as a calculated risk. Don't even get me started on the steaming pile that is Vista. The Motorola SLVR (mostly the fault of Sprint I'm sure) with the horribly lagged and buggy UI. The list goes on.

Re:So from a customer viewpoint (2, Funny)

26199 (577806) | more than 7 years ago | (#18683825)

Well, it's a marketing strategy that's worked well for Microsoft.

Re:So from a customer viewpoint (0)

Anonymous Coward | more than 7 years ago | (#18684111)

What if I told you it could fix typos after you've submitted a post?

Signetics had something similar in 1979. (2, Interesting)

Archeopteryx (4648) | more than 7 years ago | (#18683603)

It was called a ROM Patch.

And isn't this the WHOLE reason for Altera and Xilinx???

Wait a sec... (5, Funny)

Kryptonian Jor-El (970056) | more than 7 years ago | (#18683607)

"If they know that they could fix the problems later on, they could beat the competition to market."

That sounds like vista to me...except for the fixing problems later on part...and the beating competition to market...
What was my point again?

Re:Wait a sec... (2, Funny)

Frank T. Lofaro Jr. (142215) | more than 7 years ago | (#18683739)

What competition is there to Vista?

Linux doesn't even come close in consuming memory and adding vulnerabilities, but it is catching up! :)

Re:Wait a sec... (1)

Kryptonian Jor-El (970056) | more than 7 years ago | (#18683841)

But OSX does! and Vista surely didn't beat that to market!

Re:Wait a sec... (1)

gleffler (540281) | more than 7 years ago | (#18684289)

Processes: 72 total, 2 running, 70 sleeping... 252 threads 19:56:37
Load Avg: 0.20, 0.32, 0.32 CPU usage: 4.8% user, 6.5% sys, 88.7% idle
SharedLibs: num = 228, resident = 65.4M code, 7.49M data, 16.0M LinkEdit
MemRegions: num = 11158, resident = 475M + 26.1M private, 311M shared
PhysMem: 276M wired, 468M active, 969M inactive, 1.67G used, 334M free
VM: 12.1G + 144M 194776(1) pageins, 0(0) pageouts


Yeah, that 89% idle and 1.3GB of free RAM definitely parallel Vista. Especially seeing how I'm running 72 processes, one of which is Firefox (which is a huge resource hog on any system.)

If you're going to troll, at least draw a proper comparison. Like KDE.

Outdated? (1)

Stevecrox (962208) | more than 7 years ago | (#18683609)

In the university lecture I was in this year on FPGAs, the big selling point was the fact that you could do exactly this, and how it's used in industry. I'm not seeing any 'news'

Re:Outdated? (1)

HTH NE1 (675604) | more than 7 years ago | (#18683649)

In the university lecture I was in this year on FPGAs, the big selling point was the fact that you could do exactly this, and how it's used in industry. I'm not seeing any 'news'
Everything old is new(s) again.

Buggy hardware AND software? (4, Interesting)

GoLGY (9296) | more than 7 years ago | (#18683611)

"If they know that they could fix the problems later on, they could beat the competition to market.""
Great, just what we need - hardware suppliers being encouraged to release buggy versions in the guise of fully working products.

Haven't the lessons learnt by the software industry had *any* impact?

Re:Buggy hardware AND software? (3, Insightful)

evought (709897) | more than 7 years ago | (#18684135)

"If they know that they could fix the problems later on, they could beat the competition to market.""
Great, just what we need - hardware suppliers being encouraged to release buggy versions in the guise of fully working products. Haven't the lessons learnt by the software industry had *any* impact?

Sure, and those lessons are being fastidiously applied here. Customers buy that buggy, undercooked software and wait for the patches. Problem is, in an increasing number of cases, the vendors are learning that they don't even have to ship patches (e.g. the game industry, commodity hardware drivers and apps), or only support products for very short lifetimes.

Fast-followers usually have much better products than first-to-market vendors, and it used to be that they had better success as well. I am not sure that is always the case these days. Look also at the release of Vista and the fact that a new XP system simply cannot be purchased, locking customers into being beta testers (or getting off the platform entirely).

In some sense, this has already extended to hardware as more and more depends on firmware and flashable updates. A good portion of the drivers for some hardware consist of software to offload to firmware, one of the things that makes open-source drivers a pain.

Limited useability (4, Informative)

Dynedain (141758) | more than 7 years ago | (#18683621)

Great... so I assemble a new system with "patchable" hardware... only to find that the hardware is defective.

Now I'm left in a situation where I need software to patch the hardware. But I can't run the software because the hardware is defective...

This is just an excuse for being lazy. Do we really need more untested products flooding the market? Nothing like shifting the burden of quality control onto the end user to push up your profits...

On the other hand, this could be very useful in systems where physical access to the hardware is nigh impossible... satellites, for example. But this should not be used in consumer devices, and shouldn't be a crutch for faster development.

Oh great... (4, Insightful)

Brandybuck (704397) | more than 7 years ago | (#18683629)

"If they know that they could fix the problems later on, they could beat the competition to market."

Oh great, now we'll have hardware as crappy as software. I guess we'll have to get used to the new QA mantra: "If it solders, ship it!" Sigh.

Screw the end user (0)

Anonymous Coward | more than 7 years ago | (#18683645)

Great, just what we need. Technology to enable manufacturers to push more crap that doesn't work to market sooner so end users can become beta testers. Then they can obsolete the product by pushing even more crap to market before they ever get done patching the old crap that doesn't work. And the cycle of crap continues.

Having said that, what's so new about FPGAs?

Phoenix, eh? (2, Funny)

R.Mo_Robert (737913) | more than 7 years ago | (#18683651)

Hmm, he might want to work on changing the name from Phoenix. Good thing the summary says it's only "dubbed Phoenix," not that it's the final name.

What's that you say? No, "Firebird" won't work, either...

Re:Phoenix, eh? (0)

Anonymous Coward | more than 7 years ago | (#18683953)

Hmm... it's very fox-like of him to use a small bit of FPGA to correct ASIC flaws. It's sure to get companies all fired up about his product - I think the perfect name is thus FoxFire.

Re:Phoenix, eh? (2, Funny)

Godji (957148) | more than 7 years ago | (#18684279)

Heh, I got a good one: how about.... IceWeasel!!!

I guess this means... (1)

game kid (805301) | more than 7 years ago | (#18684317)

Introducing WaterOtter®—the most advanced, least well-backed technology ever created.

(Damn browser guys, always taking the good names...)

What if you can't patch (2, Interesting)

Sciros (986030) | more than 7 years ago | (#18683657)

Supposing a defect in this post-release-modifiable hardware makes it impossible to connect to the internet? Good luck downloading a fix! :-P

This could make hardware manufacturers cut QA costs at our expense. Yay!

Re:What if you can't patch (0)

Anonymous Coward | more than 7 years ago | (#18683875)

if this were for a NIC or modem of some sort (DSL/cable/telephone), sure - but no matter what, the manufacturer can send you a disc/USB dongle/what have you that you can patch with (or you can download from another machine).

Patching hardware like software (1)

edbob (960004) | more than 7 years ago | (#18683663)

I see problems with this approach. First of all, with the ability to patch flaws in the future, what is to stop a manufacturer from releasing a chip with a flaw just to get to manufacturing quickly? As we've seen with software, there could be serious flaws that cause security issues. Also, malware writers could make undesirable changes to hardware, and it might be more difficult to track and repair the damage that is done. I am more curious about the level of security that will be used to prevent unauthorized changes to hardware. Turning hardware into software will introduce the problems of software into hardware.

Someone make him shut up (0)

Anonymous Coward | more than 7 years ago | (#18683665)

For the love of god, someone tell this man that his attitude (release now, fix later) is what has gotten us into the shithole of unreliable never finished software that we're stuck in. I hope he takes his "yay short term profits and screw everything else" attitude on a long walk off of a short pier.

If I'm missing something... (4, Informative)

user24 (854467) | more than 7 years ago | (#18683667)

If I'm missing something, then I'm sure a lot of other people are too, so please explain:

exactly what is stopping malware2.0 from killing my processor?

Re:If I'm missing something... (1)

dissy (172727) | more than 7 years ago | (#18683747)

If I'm missing something, then I'm sure a lot of other people are too, so please explain:
exactly what is stopping malware2.0 from killing my processor?


A hardware switch or button that needs to be set in program mode.

Re:If I'm missing something... (1)

nuzak (959558) | more than 7 years ago | (#18683779)

> exactly what is stopping malware2.0 from killing my processor?

Most PCs don't have recoverable BIOS backups, so most PCs can be all but bricked by malware that corrupts the BIOS. For most people who aren't into pulling chips, that's completely bricked.

It's an unsuccessful virus that instantly kills its host, though. Malware these days goes to quite some lengths to avoid notice so it can actually execute its intended purpose.

Re:If I'm missing something... (1)

user24 (854467) | more than 7 years ago | (#18683817)

"so they can actually execute their intended purpose."
what, like, killing their hosts on April the first, or something?

Re:If I'm missing something... (3, Insightful)

Frank T. Lofaro Jr. (142215) | more than 7 years ago | (#18683809)

Nothing.

And this is the reality NOW.

Erasing the BIOS, stopping fans, overclocking and overvolting chips can be done TODAY.

Also, changing the region of a DVD drive until it locks out changes and leaving it on an unwanted region is also doable; another "advantage" of this attack is that it is a felony to repair the hardware, thanks to the DMCA giving DRM the force of law.

Killer POKEs didn't die with the Commodore PET and C64, they just aren't literal POKEs anymore.

Re:If I'm missing something... (1)

AcidPenguin9873 (911493) | more than 7 years ago | (#18684043)

exactly what is stopping malware2.0 from killing my processor?

What is stopping malware1.0? If you're using a modern x86 CPU (by modern I mean Pentium Pro or K6 and later), those have a microcode patch capability that can be used to modify any microcoded x86 instruction. Lots of instructions are microcoded, especially system software instructions that your OS needs. Malware could install a patch that does nothing, or locks up, whenever your processor tries to execute a MOV CR3 instruction to change page tables. Even better, it could modify the BIOS to *always* install the patch, and then your system wouldn't even boot.

Of course you'd have to know the spec for microcode patches on Intel or AMD CPUs (which is NDA protected) and then get around whatever encryption, hashing, signing, etc. that Intel or AMD uses. But, I'm sure the same security measures would be in place for these proposed hardware patches as well.
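
Incidentally, you can watch the existing microcode-patch machinery from a stock Linux box: reasonably recent kernels expose the currently loaded revision per CPU in sysfs (older systems show it as the "microcode" field in /proc/cpuinfo). A small sketch, assuming that sysfs path is present:

    /* Print the microcode revision the kernel reports for CPU 0. The sysfs
     * path below is what reasonably recent Linux kernels expose; on older
     * systems the same information appears in /proc/cpuinfo instead. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/cpu/cpu0/microcode/version";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }
        char buf[64];
        if (fgets(buf, sizeof buf, f))
            printf("cpu0 microcode revision: %s", buf);
        fclose(f);
        return 0;
    }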

bleh (3, Interesting)

dave1g (680091) | more than 7 years ago | (#18683681)

I was hoping for some idea like slapping an X-gate FPGA onto the package of a regular processor, and then if in later testing it is deemed to have a bad cache line or floating point unit, it could be reimplemented in the FPGA section and wired in, possibly increasing yields. Though these would certainly be lower-quality parts, they would at least be functionally correct, if a bit slower.

But I don't know. Something tells me that if there is a hardware problem (not a hardware design problem) then it is likely that there will be others on the same chip, due to some non-uniform distribution of impure silicon, and it wouldn't be long before there are too many corrections to fit in the FPGA.

Re:bleh (1)

Quixotic137 (26461) | more than 7 years ago | (#18684333)

But I don't know. Something tells me that if there is a hardware problem (not a hardware design problem) then it is likely that there will be others on the same chip, due to some non-uniform distribution of impure silicon, and it wouldn't be long before there are too many corrections to fit in the FPGA.
I think this would only be used to fix a design problem. Defects in silicon are spread randomly across the wafer, meaning that the fault(s) in each faulty chip are not the same. It would not be worth the effort to track down where the defect was and create new logic to avoid it just to fix one chip on the wafer.

They should call it "SoHardware" (1)

freshmayka (1043432) | more than 7 years ago | (#18683727)

You know... the merging of Software and Hardware.. SoHardware...?
But seriously this sounds like a bad idea for stuff like x86 CPUs and 3D GPUs. The whole beauty of hardware is that it's fixed. You know it won't change and so do the engineers, hence they spend more time making the systems reliable and bug-free. Self patching hardware just sounds like a bad idea for consumers.
On the other hand, it could do well for what will become legacy systems in the future. Chips that are used in machines which need to operate for 10-25 years could definitely benefit from a system like this.

Re:They should call it "SoHardware" (1)

hchaos (683337) | more than 7 years ago | (#18684007)

Actually, there's already a word for this: firmware [wikipedia.org]

Stupid post of the day. (3, Insightful)

Abuzar (732558) | more than 7 years ago | (#18683751)

Look, I'm not even a geek. I'm just some luser, and even I in my eternal stooooopidity could tell you:

a) Duh, duh, duh. This ain't no news, FPGAs have been around for quite a while, and being able to soft-repair hardware is an old idea that is being used where practical. Sheesh, slashdot is really headed for slushdot these days.

b) Great, so now we can get more defective products quicker, be on hold with tech support longer, and spend our own time/money fixing the products under warranty that we've already paid for. Someone, please shoot me!

Re:Stupid post of the day. (0)

Anonymous Coward | more than 7 years ago | (#18683973)

How'd this make it onto slashdot anyways?

Two Words (0)

Anonymous Coward | more than 7 years ago | (#18683787)

Hardware Virus.

Is this guy serious? (3, Insightful)

megla (859600) | more than 7 years ago | (#18683839)

I can't believe I'm even reading this.
The entire selling point of this system is that it allows hardware developers to do sloppy work? Great! The build-and-fix approach has worked wonders for software, what with constant security alerts and all, so why not use it for hardware? Inspired!

Have they put any thought into this at all?
That other people might make malicious "patches"?
That they'd be opening up hardware to all the vulnerabilities that software has?

Jesus christ people, use some common sense.

Re:Is this guy serious? (1)

Shados (741919) | more than 7 years ago | (#18683931)

Indeed. Hell, we see it all the time: being able to PATCH software just means that it will be buggier on release, and never really get better than the unpatchable version... That's what's semi-killed off PC gaming, since console games have always been released with fewer bugs than PC games have after the first 5 patches... though with consoles having hard drives and such, that might be history in a few years...

Reliability requires redundancy (0)

Anonymous Coward | more than 7 years ago | (#18683895)

Techniques like this could vastly increase chip yields. One of the limits to the complexity of chips is that it gets hard to produce very complex chips with no flaws. Being able to patch around flaws could bring a process from a 50% yield to a 99% yield. Implementing the system would, of course, be non-trivial.

An example of redundant systems would be communications and deep space satellites. It is so expensive to put the satellites into orbit that doubling the amount of some of the systems is actually not the most expensive part. For instance, if I need a mux with five channels it isn't that much more expensive to include six. It means I can still collect my planned-for revenue even if a channel fails. Otherwise, the failure of a channel would lead to a 20% drop in income. Also, my company would be viewed as less reliable and I would lose even more business on that account. Including the extra channel is almost a no-brainer.

In a sense I'm a bit surprised that nobody has thought of the application of redundancy to chip manufacture before now.

Re:Reliability requires redundancy (3, Insightful)

LarsG (31008) | more than 7 years ago | (#18684283)

In a sense I'm a bit surprised that nobody has thought of the application of redundancy to chip manufacture before now.

They already do.

RAM and Flash chips typically have a few redundant memory banks.

Graphics chips with faulty modules are sold as lower performing parts (example - the Nvidia 6800 LE and the 6800 Ultra both have the NV40 chip, but the LE has 8 pixel pipelines and 2 vertex shaders disabled).
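
The memory case is the easiest to picture: the die carries a few spare rows, and a tiny remap table - programmed with fuses at test time - steers accesses to known-bad rows into the spares. A toy C model of the idea, with the sizes and the "repair" step obviously invented:

    /* Toy model of row redundancy in a memory array: a handful of spare rows
     * plus a remap table that redirects accesses to rows found bad at test
     * time. Real parts do this with laser or electrical fuses; the sizes and
     * the repair step here are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ROWS    1024
    #define NUM_SPARES  4

    static uint32_t main_rows[NUM_ROWS];
    static uint32_t spare_rows[NUM_SPARES];

    /* remap[i] holds the bad row repaired by spare i, or -1 if unused. */
    static int remap[NUM_SPARES] = { -1, -1, -1, -1 };

    /* "Blow a fuse": dedicate the next free spare to a row that failed test. */
    static int repair_row(int bad_row)
    {
        for (int i = 0; i < NUM_SPARES; i++)
            if (remap[i] == -1) { remap[i] = bad_row; return 0; }
        return -1;   /* out of spares: this die gets binned down or scrapped */
    }

    /* Every access consults the remap table first, then falls through. */
    static uint32_t *resolve(int row)
    {
        for (int i = 0; i < NUM_SPARES; i++)
            if (remap[i] == row)
                return &spare_rows[i];
        return &main_rows[row];
    }

    int main(void)
    {
        repair_row(37);               /* row 37 failed at wafer test */
        *resolve(37) = 0xDEADBEEF;    /* the user never sees the difference */
        printf("row 37 reads back 0x%X\n", (unsigned)*resolve(37));
        return 0;
    }

Binning a GPU with dead pipelines into a cheaper SKU is the same idea at a coarser granularity.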

Oh great!!! (1)

pandrijeczko (588093) | more than 7 years ago | (#18683919)

Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market.""

So the P.O.S. unfinished game I just bought to run on the P.O.S. unfinished operating system I just bought is now expected to run on the P.O.S. unfinished PC I just bought...

If you've half a mind to go into marketing, that's all you need...

Re:Oh great!!! (1)

wellingj (1030460) | more than 7 years ago | (#18684327)

And if you were half of a decent consumer you wouldn't pay for P.O.S. unfinished products.

I blame the products of Microsoft, Electronic Arts and MTV on the complacency of the uneducated consumer.

Another opportunity for exploits (2, Insightful)

jobin (836958) | more than 7 years ago | (#18683949)

And what happens when someone writes a worm that modifies this to *create* errors? It could be pushed to the extent that the processor is completely crippled. A virus could really take out a system this way; you can't even start over with a clean OS install if there's no processor to do the installing. There goes your expensive new system....

no no NO! (1)

geekoid (135745) | more than 7 years ago | (#18684057)

This is bad, it means production design testing will be foisted onto the user.
We already have to worry that software products don't work and that we will be applying endless patches to them - now hardware too?

What about malicious attacks?

No, I was too busy being alarmed to read the article.

It's like microcode... (2)

Chris Snook (872473) | more than 7 years ago | (#18684075)

...except slower and more expensive.

Spell it out: Field Programmable Gate Array (1)

viking80 (697716) | more than 7 years ago | (#18684079)

In other news:
A professor in optical systems discovers that a light bulb screwed into a socket starts to emit photons.

Great! (0)

DarkLegacy (1027316) | more than 7 years ago | (#18684085)

Now Microsoft can send a software patch to our hardware through Windows Update Manager and enable DRM on the hardware level.

Excellent!

I already beta-test software for free.. (3, Insightful)

daitengu (172781) | more than 7 years ago | (#18684117)

Half of Google is in "Beta", 90% of the video games I buy are beta quality, and more and more software nowadays is labeled as "beta release 3.1415" - I don't need to beta-test a processor or GPU as well! While it would be nice to be able to _add_ things to your CPU, like support for SSE42, I think something like this in a CPU would cause more harm than good.

It'd also make debugging software that much harder, as you won't be sure where the problem lies, with the CPU or the software program itself.

Great (1)

JustNiz (692889) | more than 7 years ago | (#18684229)

This sucks, as companies will presume it's more OK to release buggy hardware now.

Oh wow, really? (2, Interesting)

Junta (36770) | more than 7 years ago | (#18684249)

I'm surprised anyone would think this is news. Also, as it stands today, some companies have *entirely* too much faith in FPGAs to get them through. We had two companies come give us product to try, both implementing the exact same technology, one with an FPGA design. They talked about how wonderful FPGAs are (as if they were new to them), that they were perfect for large-scale deployment, and that they could fix *anything* in firmware. During our evaluation, despite their claims of how well it performed, we contorted the tests all over the place to reach 70% of the 'theoretical' performance, with *huge* latency penalties on any given operation no matter how we sliced it. All this came with the bonus of an abnormally large TDP for the part.

The other solution was a traditional ASIC. Under 1/4th the TDP of the competitor, a 50-fold decrease in latency per operation, and on the first default run it got 90% of the theoretical performance, 96% after tuning. All this at a lower cost per part in production, by about 200 dollars.

We were skeptical of both vendors for different reasons; neither vendor was allowed to give us extra hand-holding until the first vendor was so embarrassingly bad that we let them go hands-on with us, because we were *certain* we had to be missing something if it was that embarrassing. Even after giving them that unprecedented advantage to offset the initial results, they couldn't come close to touching the other offering.

I know, a better company could have done better, probably, but the cost delta of FPGA and ASIC was not their fault, and the TDP of their parts, while likely worse than it could have been, probably would have been higher regardless. As another poster pointed out, it's more difficult in general to clock up FPGAs than ASICs, and so performance will suffer.

FPGAs have their place, and a huge benefit is prototyping. I've seen a number of companies do a proof-of-concept with an FPGA and go forward with a demo, but when the time comes for mass production, it's most often implemented as an ASIC. After decades of dealing with hardware bugs, the industry at large has gotten very good at glossing over the rough spots in firmware. Sure, some hardware bugs can never be addressed in such a way, and as a consequence your testing has to be better up front and will inevitably slow down a release process due to a fear of post-release returns, but that is a *very* healthy fear to have, and it ensures the quality will be better at release time than your FPGA-reliant competitor's. First to market is generally an advantage, but it is also a *huge* opportunity for embarrassment, sending your early adopters begging for your competitor's competent ASIC implementation, with a low bar to beat as well.

This was news (1)

Khashishi (775369) | more than 7 years ago | (#18684269)

in 1980!

ROTFL (1)

SeaFox (739806) | more than 7 years ago | (#18684271)

Defects found on a Phoenix-enabled chip could be resolved by downloading a patch and applying it to the hardware. Torrellas believes this would give chips a shorter time to market, saying "If they know that they could fix the problems later on, they could beat the competition to market."

Oh, boy! Defective by (lack of) design... sooner!
If there's anything wrong with hardware and software development, it's that there isn't enough quality testing done prior to shipping. How does this do anything but encourage products to be rushed to market even more? This also requires additional expense in adding an interface to the product to receive those updates, rather than just building it right the first time.

This is going to be LOTS of fun! (1)

DanielMarkham (765899) | more than 7 years ago | (#18684275)

You think it's tough getting tech support now? Wait until field-patchable hardware hits the market.

Can't read the screen? First you call the O/S manufacturer, then you call the video card guys, then you call the RISC chips guys, and so on.

That'll be loads of fun.

Hardware Viruses (1)

swaha (101157) | more than 7 years ago | (#18684287)

Since this is "programmable", there will follow a new plague of "hardware" viruses.
How does the inventor propose to defend against such things????

Fix it later? BS!!!! (1)

TavisJohn (961472) | more than 7 years ago | (#18684303)

This is both good and bad. It would be great if a device needed an update because of a mistake. HOWEVER, using this as a way to "fix it later" so you can rush something to market before it is ready - that is CRAP! There is enough bad software out there that they promise to fix and never do. There are also devices that list features as "will be available soon" via software, and that never happens either! (I am referring to a video capture card that promised to have a TV guide app, but they stopped making the device after 6 months and NEVER added the TV GUIDE!!!!)

missing the point (1)

fpgaprogrammer (1086859) | more than 7 years ago | (#18684325)

A lot of you seem to be missing the point and dismissing the article as a system to allow designers to be lazy and release "faulty designs." The article was less about allowing bad designs and more about allowing defective chips to work, thereby increasing yield and product life.

Defect-tolerant arrays are still not a new idea. Every PlayStation 3 that ships has one of the Cell's SPUs disabled so that chips with a defective SPU can still be used, which raises the Cell yield and makes the PS3 cheaper. FPGAs are an array of finer-grained processing elements and thus have lower susceptibility to defects. Many people note that FPGAs suffer a performance hit relative to ASICs, though an FPGA is more comparable to a CPU than an ASIC, since it is a general-purpose device. There are many processing tasks on which an FPGA can achieve an order of magnitude higher performance than a CPU due to parallelization and I/O efficiency. Some people said that the cost of FPGAs is very high. This is simply no longer the case: the Xilinx Spartan-3E chips can implement 1M-gate designs for under $10. Higher-performance FPGAs have cost factors which are largely attributable to market demand, and any economic comparison with CPUs should be normalized to account for volume. Even so, if cost per gate scaled linearly, one could currently devise a 1-trillion-gate FPGA for $10M based on the Spartan's costs. Even if I'm a factor of 10 off on cost, it would be the most powerful supercomputer in the world at a bargain price.

Chip defect density will increase with decreasing transistor size to a point where the most economically viable chip is a defect tolerant array that can eliminate much of the semiconductor testing and yield costs.

The major missing piece is an operating system for these things to provide a simple and transferable programming model.

Oh joy. The value is I can get more buggy crap. (1)

CFD339 (795926) | more than 7 years ago | (#18684379)

So we can get things to market faster knowing we can fix the bugs in the chips later -- after the user fails to get the promised benefit.

UGH. I'm so sick of this attitude that it is beyond description. ASUS makes some great gear, for example, and the worst software for that gear I've ever seen. Netgear is the same way. The crap both companies turn out is low quality, and it's clear that their focus is so hardware-centric that they see the software as a necessary evil that the users need in order to use the fabulous new hardware they're buying. This attitude of "fix the software after we ship" means you can almost never rely on the software that ships with products from these companies. Unfortunately, sometimes the patches are worse. Asus in particular makes me ill with this. My P58-N-SLI motherboard is amazingly fast, but the update software is junk and the most recent BIOS patch was so bad I almost couldn't back down from it.

Sounds to me like this kind of chip would be a red flag for me to dump the product back on the shelf at the store.