
IBM Announces Chip Morphing Technology

CowboyNeal posted more than 10 years ago | from the self-fixing-problems dept.


An anonymous reader writes "IBM has announced that it is now capable of producing self-healing chips. From the article: 'eFUSE works by combining software algorithms and microscopic electrical fuses, opposed to laser fuses, to produce chips that can regulate and adapt their own actions in response to changing conditions and system demands.' It goes on to say that the IBM system is more robust than previous methods, and that the chips are already in production. The future is here!"


Overclocking made safer. (2, Interesting)

Television Set (801157) | more than 10 years ago | (#9851719)

Think about it... overheat a chip, it heals itself.

Re:Overclocking made safer. (1)

Zorilla (791636) | more than 10 years ago | (#9851731)

Think about it... overheat a chip, it heals itself.

Kinda reminds me of the remote control automotive screws discussion. Overheat a chip, destroy its ability to heal itself too, perhaps? D'oh!

Re:Overclocking made safer. (4, Interesting)

dnoyeb (547705) | more than 10 years ago | (#9851765)

Heal, lol. What did I miss? A fuse is something that interrupts a circuit permanently. Akin to gnawing off a leg.

Reading their article, the big improvement is the leg has no chance to grow back.

Sounds like total spin to claim that destruction of circuits is a healing process. I smell DRM all over this.

Re:Overclocking made safer. (2, Insightful)

phats garage (760661) | more than 10 years ago | (#9852180)

I agree.

Using fuses seems best suited for small runs where your design is pretty fixed and you don't want to foot the bill for a custom chip mask. Like programmable logic arrays, etc...

So if conditions change with the environment these chips are in, they blow some fuses to respond. If conditions change back to where they were before the chip blew fuses, oh well. Some sort of nonvolatile RAM seems more in order for "adaptive" technology; heck, regular PC CMOS adapts handily to new hard disks, for instance.

It is worth noting that the real breakthrough seems to be the actual improvement in fuse technology (from the article):

  • "In the old days, people tried to do this by basically blowing the fuse up by coursing a certain amount of current in it and causing it to rupture. The problem with that mode of opening up a fuse is that there is no place for the debris to go. So it can redeposit on the fuse and cause a previously open fuse to act like it's closed," he said.

    By avoiding the rupture, IBM claims to have perfected a technique that harnesses electromigration and uses it to program a fuse without damaging other parts of the chip.
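The contrast the article draws can be modeled as two one-way switches with different failure modes. This is a hypothetical Python sketch; the class names and behavior are illustrative only, not IBM's actual design:

```python
class RuptureFuse:
    """Old-style fuse: blown open by forcing enough current to rupture the link."""

    def __init__(self):
        self.open = False

    def blow(self):
        self.open = True

    def redeposit_debris(self):
        # The failure mode quoted above: rupture debris has nowhere to go,
        # lands back on the fuse, and makes a previously open fuse act closed.
        self.open = False


class EFuse:
    """eFuse-style fuse: programmed by controlled electromigration, no rupture."""

    def __init__(self):
        self.open = False

    def program(self):
        # One-way transition; nothing ruptures, so nothing can redeposit.
        self.open = True


old = RuptureFuse()
old.blow()
old.redeposit_debris()
assert not old.open        # the "open" fuse silently closed again

new = EFuse()
new.program()
assert new.open            # stays reliably open
```

The point of the electromigration approach is that the open state is reached without a rupture, so there is no debris that can undo it.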

Maybe it's just the press release that slants this toward being "adaptive" technology.

Re:Overclocking made safer. (1)

lawpoop (604919) | more than 10 years ago | (#9852744)

"Sounds like total spin to claim that descruction of circuits is a healing process. I smell DRM all over this."

So DRM smells like fried electronics?

Maybe (1)

einhverfr (238914) | more than 10 years ago | (#9853021)

We would do better to call them self-amputating chips.

I doubt that DRM would be a driving factor, but I could see where a software security vulnerability might be exploitable to cause damage to the CPU.

I could also see where a kernel-mode DRM driver might seek to destroy CPUs used to rip CDs etc. without permission... Many questions arise from this and how technology and content providers will reach a compromise. My own personal view is that such a compromise is becoming less and less possible.

I think it is a slight improvement however, in that if you have a circuit failure, you really DO want to remove it from the active portion of the chip, and this seems to be what this does. If you only temporarily remove it, then you have the possibility that a failure in that system could cause all of the damaged materials to come back into use.

Re:Overclocking made safer. (3, Insightful)

vrmlknight (309019) | more than 10 years ago | (#9851834)

You can do that once... The main thing this helps with is a single failure in a production server: now when it happens, you are able to schedule downtime and then replace that component. It's like when you have redundant hard drives: one goes, and you can replace it when you get a chance. (Hopefully soon, before you have another failure.)

Re:Overclocking made safer. (1)

DarkOx (621550) | more than 10 years ago | (#9852666)

I think this is the fairest analogy I have seen yet after reading the article. It sounds like the chip is just taking the defective area out of the circuit and working around the problem. It seems like a degraded mode on a disk array to me: sure, your RAID 5 array keeps running with the loss of one drive, but the performance is not great.
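The degraded-mode analogy can be made concrete with RAID 5's XOR parity: with one drive gone, every missing block has to be recomputed from the survivors, which is why the array still works but more slowly. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data drives plus parity (real RAID 5 stripes parity across all
# drives; one stripe is shown here for simplicity).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Drive 1 fails; its block is rebuilt from the survivors and the parity.
surviving = [data[0], data[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == b"BBBB"  # degraded mode: extra work on every read
```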

Re:Overclocking made safer. (1)

CRYPTOFREQ (735273) | more than 10 years ago | (#9851948)

T2 is coming!!! Just everybody wait!!!

Cool (-1, Redundant)

Tajas (785666) | more than 10 years ago | (#9851720)

Just wanted to be first reply, and I think chip morphing is cool

Re:Cool (-1, Redundant)

Anonymous Coward | more than 10 years ago | (#9851745)

too bad you didn't get fp...

Sounds good to me (1, Interesting)

no1here (467578) | more than 10 years ago | (#9851721)

So what does this really mean for computers? And why design a chip? Can't it design itself? You give it all the resources it will need, tell it what to do, and it determines how best to configure itself. And then it could reconfigure itself to better adapt.

P.S. First post. :)

Re:Sounds good to me (-1, Flamebait)

Anonymous Coward | more than 10 years ago | (#9851758)

erm, not fp, try again looser

Re:Sounds good to me (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9851783)

contributing to the nitpickiness: i'm sure you mean "loser"

Re:Sounds good to me (1)

Numair (77943) | more than 10 years ago | (#9851761)

And why design a chip? Can't it design itself?

In Soviet Russia .. Chip Design YOU!

Sorry, had to ...

Re:Sounds good to me (2, Funny)

nkh (750837) | more than 10 years ago | (#9851823)

When a web server is /.ed, will it mutate into some kind of Dawson's creek trapper keeper [] ?

Chips making chips? Now that's just stupid ;) (1)

LilJC (680315) | more than 10 years ago | (#9851943)

With all the steps you described in making a chip that designs itself, you are effectively designing the chip.

Manufacturing is another ballgame, but it's not as if humans are manufacturing chips anyway.

In terms of self-improvement, it seems this would require some AI, especially to do anything innovative (i.e. more than a load balancing maneuver).

Re:Sounds good to me (1)

beswicks (584636) | more than 10 years ago | (#9852230)

There may not be chips that design chips, but there is some funky 'computers that design chips' stuff, which I guess kinda means that chips already are designing each other...

Oh dear, we are so boned. I mean, you've seen the documentary Terminator?

Erm, anyway, there is some stuff where you describe the problem and it works out what to build, although as with any abstraction, it ain't as efficient. (And it ain't quite that simple :(

Links, [] [] []

Re:Sounds good to me (0)

Anonymous Coward | more than 10 years ago | (#9852820)

This has been done; the problem with the chips was that they made themselves too specifically. If any condition in the environment changed, the chip would have problems, because it would create itself based on an exact temperature, humidity, etc.

Finally! (0)

Anonymous Coward | more than 10 years ago | (#9851722)

Finally the age of re-constructable hardware is here!

Overclocking? (0, Redundant)

phantasma6 (799340) | more than 10 years ago | (#9851723)

Would these chips with eFUSE be able to be overclocked without fear of damaging the chip?

Basis for PowerTune? (1)

Rosyna (80334) | more than 10 years ago | (#9851725)

Is this the basis for the PowerTune [] technology used (or to be used) in the 970FX? It supposedly automatically adjusts power consumption and processor speed based on how processor-intensive the current operations are.

This seems much different from current speed-stepping technologies, as it doesn't scale down to a fixed MHz rating. That is, it isn't always 2.0GHz during intensive operations and 1.2GHz for non-intensive operations. /got nothing

Re:Basis for PowerTune? (4, Informative)

rale, the (659351) | more than 10 years ago | (#9851777)

No, it's not. There's a pretty huge difference between changing logic and simply lowering frequency by a range of dividers.

Doom3 (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#9851729)

arg.. i neeed a deeeeeeem000000 rigth now ... linux one .. let the windows fuckers eat dirt....

Obligatory 2001 Quote... (5, Funny) (215152) | more than 10 years ago | (#9851738)

"I know you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen."

Re:Obligatory 2001 Quote... (4, Interesting)

Pharmboy (216950) | more than 10 years ago | (#9851813)

How applicable. Nothing beats a technology that brings up images of either "2001" or "The Matrix". (insert eFuse Overlord joke here)

But on a more serious note: while this sounds pretty cool, it still breaks down to this: if a portion of the chip is screwed up, eFuse will bypass it. If you bypass part of the chip, you will have lower performance. I can see where this would be good in enterprise computing *IF* the chip also *TELLS* you that it is messed up, so if a portion of the chip becomes defective, it will still operate until it can be replaced. This would be great for uptimes and in mission-critical systems, but for overclocking desktop systems this seems pretty useless. Here is why:

Take a 2GHz chip. Overclock to 2.5GHz. Blow two eFuses (oops). Now the chip at 2.5GHz functions as fast as a 2GHz chip. Clock back down, and it performs as fast as a 1.5GHz chip. Sell the chip or system on eBay to someone without telling them eFuses are blown, screwing them over.

Unless there is a way to test whether the eFuses are blown, I can see some real problems on the used market for this kind of chip. This would also apply to "why is this server performing like crap?" situations. Of course, as long as the eFuses are not blown, but the chip is instead just reordering its own logic for specific uses (web server only, database server, etc.), this would be majorly kick-ass, offering a quasi-specific-purpose system on the fly. Especially once you have a kernel module that can talk to it and tell it what kinds of changes in routing would be best for a given platform, telling it "this computer is used for $x only, route logic accordingly".

Re:Obligatory 2001 Quote... (1)

Hellraisr (305322) | more than 10 years ago | (#9851917)

How about if it recognized that you were running say, SETI@Home, and it optimized itself to execute that algorithm faster?

I think that's the real advantage of these kinds of things. Now you won't have just an all-purpose processor, but an all-purpose processor that can specialize in the task you are currently working on.

Wouldn't this make it seem like you have a PC designed just for what you are doing, with everything you do?

Re:Obligatory 2001 Quote... (0)

Anonymous Coward | more than 10 years ago | (#9852210)

IBM are masters of the hardware-self-monitoring business. Their "enterprise" mainframes have basically always (i.e. for decades) been able to call "home" to IBM and order spare parts and an engineer callout. IBM engineers show up at your door before you know your AS/400 is having difficulty (unless you read the system logs yourself).

Re:Obligatory 2001 Quote... (1)

6800 (643075) | more than 10 years ago | (#9852279)

"If you bypass part of the chip, you will have lower performance." It ain't necessarily so! As long as we are blue-skying these capabilities, the small pattern sizes allow for extra hardware to be included on chips that may not even be used until placed into service by a process such as this, for healing, or even when purchasing extra/future performance boosts!

Re:Obligatory 2001 Quote... (0)

Anonymous Coward | more than 10 years ago | (#9852564)

The fuses don't 'blow' in a typical sense, there is no contact that is vaporized by excessive current. Rather the fuses act like some sort of switch, that IBM claims can be set to open by controlled electromigration. I don't know the details of the process, but as there is no physically blown switch I would assume it restores to normal on removal of power, and then re-adjusts its fuses as necessary on next power on....Sounds good to me.

Awesome (1, Insightful)

Stevyn (691306) | more than 10 years ago | (#9851742)

This sounds like an innovation above and beyond upping the clock speed and making a bigger heatsink. Take that, Pentium!

Re:Awesome (1)

remin8 (791979) | more than 10 years ago | (#9852015)

Yes, it is a really cool innovation, but what does it really do? Personally I have never broken a CPU, and I doubt many others have. The only real benefit I see is for the producer: being able to sell flawed chips that still work. People will end up paying more to ensure the eFuses are not blown on their chips, much like buying an LCD with guaranteed no dead pixels.

I have to change my name! (1)

CherniyVolk (513591) | more than 10 years ago | (#9851749)

I'm running to the Court House RIGHT NOW!
Changing my name to John Conner!

Re:I have to change my name! (1)

johnpaul191 (240105) | more than 10 years ago | (#9852229)

um, shouldn't you change your name FROM John Conner to something else? lay low till it's your time.

there is no future but what we make

wish it were more descriptive (3, Insightful)

tklive (755607) | more than 10 years ago | (#9851762)

While it does sound like a big step forward, esp. considering "eFUSE is technology independent, does not require introduction of new materials, tools or processes", how exactly is it self-healing?

Nothing is mentioned about the redundancy required for the reroutings... it's obvious not all kinds of faults can be handled this way. So, do they try to predict possible faults and build in workarounds, or do they just use the natural design to handle whatever can be? Does this affect the way they design circuits... make more generic blocks, etc.? And maybe I didn't really understand the article... but isn't it more of a self-correcting rather than self-healing feature?

wish the article had more info...

Precisely. Not self-healing. (1, Insightful)

Anonymous Coward | more than 10 years ago | (#9852078)

That's just what I was thinking. It sounds like this can only be used to incorporate some redundancy.

Self-healing would be something completely different, imho -- the ability to rebuild damaged circuitry from some kind of schematic or remaining information, or maybe the ability to fall back to general instructions on the main CPU if a specialist unit like a GPU failed.

Morphing Caterpillar (0, Troll)

carboncopy79 (619156) | more than 10 years ago | (#9851771)

IBM introduces chip morphing technology Friday July 30, @09:43PM Rejected

Anyway, the chip morphing thing doesn't morph like a caterpillar to a butterfly.

Artificial Intelligence/Life (2, Insightful)

Ceriel Nosforit (682174) | more than 10 years ago | (#9851775)

"eFUSE reroutes chip logic, much the way highway traffic patterns can be altered by opening and closing new lanes," said Bernard Meyerson...

...And much like the neurons in the brain? Doesn't this have rather large significance for AI, or artificial life, for that matter? If the IBM solution is part software, who is to say the software cannot be intelligent?

Re:Artificial Intelligence/Life (1)

r2q2 (50527) | more than 10 years ago | (#9851945)

That depends on your definition of intelligence. The thing just has a way to "heal" itself. Neurons in the brain are much more complicated and have much better algorithms for pattern recognition, etc.

Re:Artificial Intelligence/Life (1)

Ceriel Nosforit (682174) | more than 10 years ago | (#9852041)

From younger days, I remember people speaking a lot about brain cells dying because they were not in use. I never devoted the time to figure out how this could be a Good Thing, but if a chip can do it too, then surely it takes a step closer toward acting like the brain does. Maybe the brain works at a lower level than this IBM solution, and builds up its logical circuits by nuking selected cells?

With a limit? (5, Insightful)

usefool (798755) | more than 10 years ago | (#9851780)

Surely a chip cannot keep self-healing indefinitely, can it?

If it's capable of re-routing certain paths when something goes wrong, it'll eventually run out of alternative paths, or performance will be degraded to next to useless.

However, it's certainly a good pre-emptive tool for mission-critical machines, provided it has a way of informing the admin that it's dying rather than secretly degrading.
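The limit described above, and the need to report degradation rather than hide it, can be sketched as a finite pool of spare units with a visible repair log. This is a hypothetical model, not how eFUSE actually exposes its state:

```python
class SelfRepairingChip:
    """Hypothetical sketch: a fixed pool of spare units, with each
    repair logged so an admin can see the chip is degrading."""

    def __init__(self, spares):
        self.spares = spares
        self.repair_log = []

    def unit_failed(self, unit):
        if self.spares > 0:
            self.spares -= 1
            self.repair_log.append(unit)   # degradation stays visible
            return "rerouted"
        return "unrecoverable"             # out of alternative paths


chip = SelfRepairingChip(spares=2)
assert chip.unit_failed("alu0") == "rerouted"
assert chip.unit_failed("alu1") == "rerouted"
assert chip.unit_failed("alu2") == "unrecoverable"   # pool exhausted
assert chip.repair_log == ["alu0", "alu1"]
```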

Re:With a limit? (0)

Anonymous Coward | more than 10 years ago | (#9852234)

It always seems to work for Captain Picard.

Is it new or what? (1)

News for nerds (448130) | more than 10 years ago | (#9851785)

I'd seen a few articles about other dynamically reconfigurable chips such as this [] before now. In what way is IBM's different from the others? Making a single chip autonomic in itself is only about packaging, I suppose.

On-Chip Sparing (2, Insightful)

Detritus (11846) | more than 10 years ago | (#9851811)

Sounds like it is most useful for permanently reconfiguring a chip to use spare functional units after problems are detected with the currently selected functional units.

Default Color Link (4, Informative)

Anonymous Coward | more than 10 years ago | (#9851816)

Re:Default Color Link (1)

vrmlknight (309019) | more than 10 years ago | (#9851848)

Thank God... If only I had mod points.

Re:Default Color Link (0)

Anonymous Coward | more than 10 years ago | (#9852188)

Stop spamming that crap...

Who cares?

Re:Default Color Link (2, Informative)

aftk2 (556992) | more than 10 years ago | (#9852890)

On my work monitor, the gamma is so bad that reading the IT color scheme is really straining. doesn't do it...but IT sure does.

So with that in mind, because what else is there to do on a Saturday morning... I made a JavaScript bookmarklet that will replace whatever Slashdot URL you're viewing with the default color scheme. []

Obviously, it won't work when you're browsing the top-level category pages of a particular section, but once you're in the articles, it should work at any level.


Re:Default Color Link (1)

danila (69889) | more than 10 years ago | (#9853863)

I read Slashdot in the PDA mode, you insensitive clod!

All previous attempts failed (0)

Anonymous Coward | more than 10 years ago | (#9851817)

All their previous attempts at self-healing chips using magic server pixie dust failed. They must have made quite an advancement.

Article lacking detail... but... (3, Insightful)

3seas (184403) | more than 10 years ago | (#9851828)

What this sounds like is a chip production success/failure rate improvement, as well as providing a bit more flexibility in going from the drawing board (design/theory) to production (testing/reality).

I think it is very interesting that they are using something that was considered to be bad in chip reality (electromigration), as a positive thing.

This is, by analogy, like how our bodies exist symbiotically with many different germs and such, without which we'd die a lot sooner.

I don't think what the article is talking about is anything like reprogrammable chips (FPGAs), as some may think from reading the article, but rather something automatically used once, between the chip production line and its actual ongoing system use, to auto-test and correct any production anomalies per chip. (Is this where we say bye-bye, Neo?)

if it aint broke... (1)

Anubis333 (103791) | more than 10 years ago | (#9851854)

Oh yeah, PCBs go bad all the time. Wait, processors and PCBs are probably the most reliable things in all of our electronics. When a processor or PCB breaks, it's due to something these chips would not be able to 'self-heal', like horrible electrical damage or overheating. How about a *true* self-healing HD or optical media (CD/DVD)?

Maybe on a much larger scale, perhaps a motherboard with reprogrammable chips would work, so that when your modem burns out from a power surge, it can reprogram some other modular chipset (audio, etc.) to take over. But that would mean every chip would be re-taskable, and more powerful than needed to do its job, so that it could take distributed tasks from another. It would also be the end of specialized chips, and a lot more expensive.

Re:if it aint broke... (2, Insightful)

mabhatter654 (561290) | more than 10 years ago | (#9852086)

True for PCs, but an IBM server isn't anything close to a PC. Considering almost all IBM i and p series servers are shipped with multiple deactivated processors, as well as separate processor cards to handle raw I/O processing, this is more of a RAID-type thing. The purpose of RAID isn't to prevent the drive from failing but to allow you to limp along until you can swap the drive. Also, remember that IBM "big iron" has hot-spare, hot-swappable EVERYTHING... CPU, RAM, PCI cards, controllers, disks...

Re:if it aint broke... (1)

timeOday (582209) | more than 10 years ago | (#9853037)

I tend to agree for the most part.

But let's imagine you have a cluster with 1024 diskless nodes. At that scale, you need a ridiculously high per-node MTBF just to get anything done before your cluster breaks down. This might be a lot simpler and cheaper than trying to manage redundancy at higher levels.
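That scaling argument is easy to quantify: assuming independent, exponentially distributed failures, the expected time to the first failure among n nodes is roughly the per-node MTBF divided by n:

```python
def cluster_mtbf(node_mtbf_hours, nodes):
    """Expected time to the first failure among `nodes` independent
    nodes with exponential failure times: MTBF / n."""
    return node_mtbf_hours / nodes

# Even with a 100,000-hour (~11-year) per-node MTBF, a 1024-node
# cluster expects its first node failure in under 100 hours.
assert cluster_mtbf(100_000, 1024) < 100
```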

Or maybe you're building a chip to control antilock braking, or for that matter an airliner or a spaceship. Even if the odds of breaking something handled by this mechanism are fairly low, it might still be worth it.

IBM's New Strategy (1)

LordHatrus (763508) | more than 10 years ago | (#9851858)

IBM's new engineering strategy - let the chip design itself! (I guess that's the ultimate form of corporate in-sourcing...)

eBONDAGE, eDRM, eVIBRATORS (2, Interesting)

t_allardyce (48447) | more than 10 years ago | (#9851865)

wow, now they've invented the 'eFUSE' maybe they could invent the 'eLAMP' and 'eDIODE' and 'eTRANSISTOR' - amazing 'e' components that can be controlled electronically!!

I know on-chip fuses (PROM?) have been around before, and this seems to basically be the same thing but more reliable and with 'e' on the end, which I'm guessing stands for electromigration, which AFAIK is a problem with very small paths on chips that get screwed up by the flow of electrons and some sort of ionic-bonding interaction. Why call it eFUSE? Probably because they have marketing idiots.

Is this going to be used for DRM? If the chip detects tampering, could the same fuses that work in this system be hijacked by the DRM to destroy the chip? What are the security implications of this? Could someone fire off the fuses remotely?


t_allardyce (48447) | more than 10 years ago | (#9851873)

Damn skimming, didn't see that electromigration was used _in_ the fuse. OK, e-fuse makes sense, eFUSE is stupid, and arguing about capitalisation is also a bit stupid.


Anonymous Coward | more than 10 years ago | (#9852607)

It's an e-fuse because it isn't a fuse. It doesn't act like a traditional fuse, and until we know exactly how it does behave we can't say much about how it will really affect anything. It sounds more like an electronic switch than a fuse; perhaps we will have to add the efuse to our list of electronic components (diode, transistor, SCR, etc.). If so, let's hope they come up with a better name. ETIS (Electronically Toggled Inline Switch)?


t_allardyce (48447) | more than 10 years ago | (#9853046)

I'm pretty sure it is a fuse. We already have transistors and SCRs and field-effect transistors etc., so I doubt it's just another switch (nowhere do they say it can be un-blown, and they always call it a fuse). Why not call a switch a switch? You don't call a plane a car that can fly. It's kinda like how you have Read Only Memory, which can _only_ be read, but then you have Programmable Read Only Memory, which can only be read except it can be written, which is really stupid grammar. Even worse is EPROM! And then EEPROM!

Customizing chips in the future (1)

Chris_Mir (679740) | more than 10 years ago | (#9851868)

Sounds to me like we're heading toward customizable chips in the future. Flawed chip designs (remember floating-point calculations in the early Pentium age?) could be updated with a better design.

Potentially disastrous for desktop CPU's (3, Insightful)

wamatt (782485) | more than 10 years ago | (#9851871)

Let's hope IBM has the foresight to ensure that the eFuse feature cannot be controlled by software.

Think about the latest worm going around taking your nice new 3200MHz processor to an effective 100MHz by blowing all the fuses and crippling it.

I would guess though, because of the high R&D costs involved, this will only ever see its way into high-end servers.

Re:Potentially disastrous for desktop CPU's (1)

thpr (786837) | more than 10 years ago | (#9852035)

Think about the latest worm going around taking your nice new 3200Mhz processor to an effective 100mhz by blowing all the fuses and crippling it.

I'm guessing that won't happen. Chances are this feature was designed to work around problems in a high end server. Trying to keep a mainframe at 99.999% uptime requires the ability to adapt to hardware failure. Thus, this would be a part of the hardware, and the software would only know about it enough to send the message to your IBM support person to come fix the mainframe.

I would guess though, because of the high R&D costs involved, this will only ever see its way into high-end servers.

Consider other technologies that once seemed too expensive for lower-end systems. The pSeries (AIX machines based on POWER5) now have logical partitioning, once reserved for the zSeries (mainframes). I would expect this to be in mainframes for a chip generation (or two, worst case). Then expect to see it in iSeries and pSeries, followed by other IBM devices.

Re:Potentially disastrous for desktop CPU's (1)

danila (69889) | more than 10 years ago | (#9853876)

I would guess though, because of the high R&D costs involved, this will only ever see its way into high-end servers.

You have a poor grasp of basic economics. If something cost a lot in R&D, that is a good reason to mass-produce it, to spread the R&D costs over a lot of units. The only reason something is limited to high-end products is that the MANUFACTURING costs are high, and the article explicitly states that the fuses are added at no additional cost. So the only logical thing for IBM to do would be to use this technology nearly everywhere.
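The economics here is plain amortization: R&D is a fixed cost, so spreading it over more units drives the per-unit cost down. A quick illustration with made-up numbers:

```python
def unit_cost(rnd_cost, marginal_cost, units):
    """Per-unit cost with a fixed R&D budget amortized over volume."""
    return marginal_cost + rnd_cost / units

# Same hypothetical $50M R&D program, $20 marginal cost per chip:
assert unit_cost(50e6, 20, 100_000) == 520.0    # small high-end run
assert unit_cost(50e6, 20, 10_000_000) == 25.0  # mass production
```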

Heal My Brothers! (0)

Anonymous Coward | more than 10 years ago | (#9851887)

Heallllllllllllll My Brothers! Let the power of IBM compel thee!

Argh! (2, Informative)

PKC Jess (797453) | more than 10 years ago | (#9851892)

I know this is slightly offtopic, but how many people have ever actually BEEN to the IT section of Slashdot? It hurts the eyes! On a more ontopic note, I am très excited about this technology and I for one will keep a sharp eye on it. Go self-healing technology!

Re:Argh! (0)

Anonymous Coward | more than 10 years ago | (#9852093)

... how many people have ever actually BEEN to the IT section of slashdot ...

I would imagine everyone reading your post ...

Re:Argh! (1)

phyruxus (72649) | more than 10 years ago | (#9852246)

>>I know this is slightly offtopic, but how many people have ever actually BEEN to the IT section of slashdot? It hurts the eyes!

>> I would imagine everyone reading your post ...

You know, just before I scrolled down to parent's parent, I thought to myself, "Aaah, my eyes!" then "Nah, I won't post that because everyone here is posting through this hellish beige haze. Don't bother"

All within 2 seconds, I scrolled down to parent's parent. While you're technically right, I now understand the nature of pedantry and will strive to avoid it (maybe).

Why am I replying to a score:0 from an AC? Oh yeah, pedantry... it's a vicious circle O

Re:Argh! (1)

realdpk (116490) | more than 10 years ago | (#9852772)

The color scheme is terrible, yeah.

A friend of mine pointed out that if you change the "it." in the URL to, say, "games.", it works fine and is readable.
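That trick is just a subdomain swap on the URL; sketched here in Python (the example URL path is illustrative) rather than as a browser bookmarklet:

```python
from urllib.parse import urlparse, urlunparse

def swap_section(url, new_section):
    """Replace the section subdomain (e.g. 'it') with another one."""
    parts = urlparse(url)
    base_host = parts.netloc.split(".", 1)[1]   # drop the old subdomain
    return urlunparse(parts._replace(netloc=f"{new_section}.{base_host}"))

url = "http://it.slashdot.org/story/12345"
assert swap_section(url, "games") == "http://games.slashdot.org/story/12345"
```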

Nothing new to see folks (2, Informative)

shokk (187512) | more than 10 years ago | (#9851910)

Nothing new here. Virage Logic Corporation [] has had these designs on the shelf for their Self-Test and Repair (STAR) Memory System for some time now [] . It has been licensed to quite a few parties already for use in the various fabs so this is already being done.

Look through the website. IBM is even a customer.

Who benefits really? (5, Interesting)

hashwolf (520572) | more than 10 years ago | (#9851912)

When batches of silicon chips are made a number of them are always defective.

This technology is more beneficial for IBM than for us, because it will allow IBM to SELL defective-but-self-repairable chips instead of SCRAPPING them. Because of this, it is highly probable that there will be no way for end users to garner info about the extent to which the chip has already repaired itself.

If this is the case IBM will probably take one of the following roads:
1) Continue with the current manufacturing standards - this would yield chips with more longevity.
2) Manufacture chips with less stringent (and hence cheaper) manufacturing standards - although this would yield more defective chips, these won't be thrown away since they can self repair; they will be sold instead!

I really hope it's not option #2 they chose.

Re:Who benefits really? (0)

Anonymous Coward | more than 10 years ago | (#9851932)

What about viruses? Wouldn't it be possible for a virus to disable the hardware on these new chips by burning all the fuses?

Re:Who benefits really? (2, Informative)

Detritus (11846) | more than 10 years ago | (#9851951)

They already do #2 with DRAM chips, to keep the yields at reasonable levels. Although I think they have to be tested and repaired before they are shipped from the factory.

Next up (1)

Lost Penguin (636359) | more than 10 years ago | (#9851963)

The liquid metal chip.
Overclocking makes the chip kill you!

Download an MP3, Blow Up your CPU? (3, Insightful)

iansmith (444117) | more than 10 years ago | (#9852026)

The first thought that entered my head when I read this was, "Great... now we can have hardware that can be designed to self-destruct on demand." Imagine you get sold a CPU with an expiry date... software licenses for hardware; the old "you don't own the chip, you're just renting it."

IBM had better be REAL careful with this, too. If it's possible to fool the chip into blowing these fuses, a virus could potentially damage millions of computers in a day of spreading.

As others mentioned, it is a neat trick, but a solution in search of a problem. CPUs just don't fail often enough to need something like this.

Re:Download an MP3, Blow Up your CPU? (2, Interesting)

jc42 (318812) | more than 10 years ago | (#9853481)

Older geeks will remember all the stories back in the 70's about people who paid big bucks for some fantastic new feature in IBM's cpus, and watched the IBM guy come over and "install" it by clipping a jumper wire or two on a board.

We're probably going to be hearing a lot more of those stories in the future as a result of this development. Except that the IBM guy won't have to actually come over and clip anything. They'll be able to do it across the Net by asking you to download an Install program, which will execute the commands to burn out the appropriate piece of the cpu chip, which turned out to send a "disable" signal to the circuitry that you just paid for.

Re:Download an MP3, Blow Up your CPU? (1)

Jerf (17166) | more than 10 years ago | (#9853758)

a virus could potentially damage millions of computers in a day of spreading.

We already live in that world. Viruses can already in theory toast BIOSes by flashing them with crap, or (equivalently for most people) destroying the OS. This new tech really wouldn't change anything (and BIOS destruction is likely to be "lower hanging fruit" for a while yet).

Wait for the exploits (3, Funny)

mangu (126918) | more than 10 years ago | (#9852033)

So, will we see a day when your computer catches a virus that transforms that gazillion GHz CPU into a 2 MHz 6502?

Memory - not logic (2, Insightful)

Anonymous Coward | more than 10 years ago | (#9852048)

Efuse and laser fuse are technologies for repairing memory defects, not for repairing logic defects.

From the article, it appears this innovation applies to the embedded memory on a logic chip:
"...all 90 nanometer custom chips, including those designed with IBM's advanced embedded DRAM technology"

"Mission Critical" (1)

TarrVetus (597895) | more than 10 years ago | (#9852100)

Maybe we should stop and think about the wonderful applications this could have. Systems that are meant to monitor information for years, like space satellites or the Yucca Mountain systems, could be equipped with these chips and last many times longer than they would have before. And what about equipping space shuttles with these? Or public utilities like power grids?

This is a huge accomplishment!

Roll Back ?? (1)

proudlyindian (781206) | more than 10 years ago | (#9852138)

What happens if they realise that they have to roll back to the previous chip config?

When did Skynet come online again? (1)

ddkilzer (79953) | more than 10 years ago | (#9852178)

How long before an emergent intelligence develops?

Great. Now Hackers can screw my CPU (1)

cardoso (90714) | more than 10 years ago | (#9852193)

Self-healing chips. OK, we can build a T1000 now. The "rerouting" scene is a classic in SCI-FI, but what if...

Some stupid worm uses a backdoor to start a haywire self-healing sequence?

Dave... Dave... Nah. More like... FZzzzzttt...

Douglas N. Adams (1)

skidoo2 (650483) | more than 10 years ago | (#9852225)

The man, whose initials (obviously coincidentally?) were DNA, must've been some sort of prophet. Remember, Deep Thought was the SECOND best computer. And when it came up with the answer (you all know it, admit it!), and then determined that it was the correct QUESTION that we really needed, it set to work on building a BETTER computer. I mean, the Hitchhiker's Guide was the prototype for the World Wide Web, and Deep Thought was the ultimate self-healing--in fact self-UPGRADING--computer. Maybe he was the Second Coming or something (DNA), and just didn't realize it.

Re:Douglas N. Adams (1)

Vesh Fey (801880) | more than 10 years ago | (#9853246)

ummm 42?? Is that right?

Radiation hardness? (2, Interesting)

kievit (303920) | more than 10 years ago | (#9852249)

Maybe this technology could be useful for making chips which can survive in radioactive environments, like particle detectors in accelerator laboratories or in satellites? (And if that is so, then the military is probably also interested, to use them in battlefield drones.)

Re:Radiation hardness? (0)

Anonymous Coward | more than 10 years ago | (#9853325)

Exactly. Use of evolvable hardware techniques for developing RAD-HARD circuits is all the rage in the DoD right now--especially for detecting logic damage (although it seems this article is all about memory damage--crawl before you walk). Radiation damage has the potential to alter circuitry without destroying it--i.e. turn an AND gate into a NAND gate. If the chip can recognize that and reroute its logic, you have an identical chip in function with just a slightly different design.
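The recognize-and-reroute idea in the parent can be sketched very simply: exhaustively test a suspect 2-input gate against its intended truth table, and switch the netlist over to a spare if it has drifted. This is a toy model of mine, not anything from the article; the gate names and spare-gate scheme are illustrative.

```python
from itertools import product

def truth_table(gate, n_inputs=2):
    """Evaluate a gate on every input combination."""
    return tuple(gate(*bits) for bits in product((0, 1), repeat=n_inputs))

AND  = lambda a, b: a & b
NAND = lambda a, b: 1 - (a & b)  # what an irradiated AND might become

def diagnose(suspect_gate, reference=AND):
    """Return True if the suspect gate still matches its intended function."""
    return truth_table(suspect_gate) == truth_table(reference)

def reroute(suspect_gate, spare_gate, reference=AND):
    """If the suspect gate's behavior has drifted, reroute to a spare
    implementation of the same function; otherwise keep using it."""
    return suspect_gate if diagnose(suspect_gate, reference) else spare_gate
```

On real hardware the "exhaustive test" would be built-in self-test vectors and the "spare" would be redundant fabric, but the control flow is the same.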

I'm missing something here (1)

Ridgelift (228977) | more than 10 years ago | (#9852274)

Seems to me this will lead to lazy production practices, not better-built chips. Maybe it's needed in order to keep pushing the Moore's Law marketing vision.

FPGAs? (2, Interesting)

andreyw (798182) | more than 10 years ago | (#9852311)

I was thinking about blowing some money on a large FPGA and the associated hardware and software.

It shouldn't be that much of an issue to program one part of the FPGA with the logic to reprogram the rest, should it?

And start from there. Damn, this sounds so uber-cool. Retargetable and reprogrammable logic really blurs the line between software and hardware.
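The basic reprogrammable element in an FPGA is the lookup table (LUT): its "configuration" is literally just a truth table held in memory cells, which is why one part of the fabric can rewrite another. A toy simulation of a 2-input LUT (the class name and `reprogram` method are my invention, for illustration only):

```python
class LUT2:
    """Two-input lookup table, the basic reprogrammable FPGA element.
    Its configuration is just a 4-entry truth table."""

    def __init__(self, table):
        assert len(table) == 4, "a 2-input LUT has 2**2 entries"
        self.table = list(table)

    def __call__(self, a: int, b: int) -> int:
        # The inputs select one entry of the stored truth table.
        return self.table[(a << 1) | b]

    def reprogram(self, table):
        """Partial reconfiguration: overwrite this LUT's bitstream
        while the rest of the 'fabric' keeps running."""
        assert len(table) == 4
        self.table = list(table)

lut = LUT2([0, 1, 1, 0])    # configured as XOR
lut.reprogram([0, 0, 0, 1])  # now behaves as AND
```

Real FPGA partial reconfiguration goes through the vendor's bitstream interface rather than a method call, but the principle--logic as rewritable data--is exactly this.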

Upgrade about processors? (1)

Animekiksazz (653048) | more than 10 years ago | (#9852364)

Does this mean that in the future I'll just download a new design for my processor off the Internet, install it, and suddenly have a more heat-efficient processor, or one more specific to my needs? Or perhaps keep a few different layouts on disk that I can switch between? That sounds pretty far out, but it's a neat idea.

Wonder Processor Powers: ACTIVATE! (1)

ArmorFiend (151674) | more than 10 years ago | (#9852371)

Wonder Processor Powers: ACTIVATE!
Shape of: A Laser Fuse!
Form of: A Morphing Microchip, uh, made out of ICE!

Self-healing cool, color scheme sucks (0, Troll)

scruffy (29773) | more than 10 years ago | (#9852412)

Self-healing chips are cool, but this color scheme sucks.

Maybe we can direct IBM's research toward self-healing color schemes.

Uh! its only been around for about 20 years (0)

Anonymous Coward | more than 10 years ago | (#9852531)

Ever hear of FPGAs (Field Programmable Gate Arrays)?
Ever heard of the companies Xilinx or Altera?

Dynamic Processor Sparing (1)

6800 (643075) | more than 10 years ago | (#9852602)

Heck, couple this with IBM's LPAR hypervisor on a POWER5 machine and you get so much redundancy and flexibility it boggles the untrained mind! From: upport.pdf

"Dynamic Processor De-allocation enables defective processors to be taken offline automatically, before they fail. This is visible to applications, since the number of online logical processors is decremented. An application that is attached to the defective processor can prevent the operation from being performed, so Dynamic Processor De-allocation may fail to remove the defective processor in some cases.

Dynamic Processor Sparing transparently replaces defective processors with spare processors. It is transparent to applications, because spare processors are not in use by the system. The spare processor assumes the identity of the defective processor. Dynamic Processor Sparing is dependent on the presence of spare processors. A system has spare processors if it is shipped with extra processors that the customer did not pay for. These processors may be activated using Capacity on Demand procedures.

Both of these Reliability, Availability and Serviceability (RAS) features are enhanced by shared processor technology. Enhanced processor virtualization enables the hypervisor to implement Dynamic Processor Sparing in a manner that is completely transparent to the operating system. In effect, processor sparing becomes purely a hardware/firmware technology, which can be applied to any partition, including Linux partitions, for the first time. On the other hand, Dynamic Processor Deallocation is still implemented jointly between the operating system and firmware, although shared processor technology represents a significant advance in that it enables capacity, and not logical CPUs, to be removed. This means it will be more transparent to applications and middleware and can be applied to partitions with one logical CPU. Previously, it could only be applied if there were two or more logical processors."
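The sparing mechanism quoted above boils down to: take the defective CPU offline and have a dormant spare assume its identity, so the count of online logical processors never changes. A toy model of that bookkeeping (class and method names are mine, not IBM's hypervisor interface):

```python
class ProcessorPool:
    """Toy model of Dynamic Processor Sparing: a defective CPU is
    replaced by a dormant spare, transparently to applications,
    because the number of online processors stays constant."""

    def __init__(self, active_ids, spare_ids):
        self.online = set(active_ids)
        self.spares = list(spare_ids)  # shipped but not paid for / not in use

    def spare_out(self, defective_id) -> bool:
        """Take a failing processor offline and bring a spare online
        in its place. Fails if no spares remain (then you fall back
        to visible de-allocation instead)."""
        if defective_id not in self.online or not self.spares:
            return False
        self.online.remove(defective_id)
        self.online.add(self.spares.pop(0))
        return True
```

Once the spares run out, the quoted text says the system degrades to Dynamic Processor De-allocation, which *is* visible to applications--hence the `False` return path.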

Intel's Montecito (0)

Anonymous Coward | more than 10 years ago | (#9852673)

The next-generation member of the Itanium family, codenamed Montecito, has a feature that sounds similar.

From a ZDNet article:
Pellston technology, which will be inside Montecito, will allow a computer to kill malfunctioning sections of a chip's cache, a pool of memory embedded in the chip, and continue to use the chip. Currently, users have to replace these chips to prevent recurring errors.
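The cache-killing behavior described above can be sketched as simple bookkeeping: track errors per cache line and deconfigure lines that keep failing, so the rest of the cache stays usable. The threshold policy below is made up for illustration; the article doesn't describe Pellston's actual criteria.

```python
class ECCLineTracker:
    """Sketch of Pellston-style cache self-healing: lines that keep
    producing errors are deconfigured instead of forcing a chip swap."""

    def __init__(self, n_lines: int, error_threshold: int = 3):
        self.errors = [0] * n_lines
        self.disabled = set()
        self.threshold = error_threshold  # assumed policy, not Intel's

    def report_error(self, line: int) -> None:
        """Record an error on a line; kill the line once it looks
        like a recurring hardware fault rather than a one-off."""
        self.errors[line] += 1
        if self.errors[line] >= self.threshold:
            self.disabled.add(line)

    def usable_lines(self):
        """Lines still available to the cache allocator."""
        return [i for i in range(len(self.errors)) if i not in self.disabled]
```

The net effect matches the quote: you lose a little capacity instead of replacing the whole chip to stop recurring errors.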

When does this happen in the movie? (2, Funny)

Dachannien (617929) | more than 10 years ago | (#9852790)

The future is here!

Dark Helmet: "What happened to then?"
Col. Sanders: "We passed it."
Dark Helmet: "When?"
Col. Sanders: "Just now. We're at now, now."
Dark Helmet: "Go back to then."
Col. Sanders: "When?"
Dark Helmet: "Now."
Col. Sanders: "Now?"
Dark Helmet: "Now!"
Col. Sanders: "I can't."
Dark Helmet: "Why?"
Col. Sanders: "We missed it."
Dark Helmet: "When?"
Col. Sanders: "Just now."
Dark Helmet: "When will then be now?"
Col. Sanders: "Soon."

been done for a while (0)

Anonymous Coward | more than 10 years ago | (#9853149)

This has been done for ADCs and DACs for years now, especially where a chip needs a "golden" reference resistor ladder, as in DACs. We used to actually trim the length of resistive elements to get them exact and lower the INL/DNL of specific bits (read: signal-to-noise ratio). A few years back it was decided to build the resistors out of a bunch of parallel higher-order resistors with 0-ohm links connecting them; if the resistance was too low, take out a parallel resistor by electrically blowing a link on the test ATE (automated test equipment), saving the time and cost of a second-pass test on a laser trimmer. Just about every high-quality audio DAC in, say, a DVD player is trimmed this way.

With regard to DRAM... hmm, I imagine they've taken it a step further and built a check-and-reroute routine into the refresh cycle.
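The trimming trick described above works because blowing a link removes one leg from a parallel network, which always raises the equivalent resistance by a small step. A toy sketch of that greedy trim loop (the function names and the choice of which link to blow are mine, for illustration):

```python
def parallel(resistances):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

def trim_up(target_ohms: float, legs):
    """Greedy trim: start with all legs connected (lowest possible R)
    and 'blow links' (remove legs) one at a time until the network
    meets or exceeds the target resistance."""
    active = list(legs)
    while len(active) > 1 and parallel(active) < target_ohms:
        # Assumed policy: blow the link on the largest leg, since
        # removing it raises the total resistance by the smallest step.
        active.remove(max(active))
    return active, parallel(active)

# Illustrative: four 1 kOhm legs (250 Ohm total), trimmed up toward 400 Ohm.
remaining, r = trim_up(400.0, [1000.0] * 4)
```

This is one-pass and purely electrical, which is the cost saving the parent describes versus a second trip through a laser trimmer.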

The future is here! (1)

livhan28 (749650) | more than 10 years ago | (#9853189)

"The future is here!"?!
But I was promised flying cars, flying cars!

FPGA technology (0)

Anonymous Coward | more than 10 years ago | (#9853328)

Instead of sorting the chips into 486 and 486SX, depending on whether the floating point unit worked or not, this would mean making the changes on the fly, with some redundant circuitry to fix the errors. That could increase the chip yield dramatically. Lower prices...

No, it's not! (1)

NerveGas (168686) | more than 10 years ago | (#9853475)

The future is here!

No, it's not! It won't be here for another three... oh, never mind. Now it is here.
