
Intel Stomps Into Flash Memory

ScuttleMonkey posted more than 7 years ago | from the hoping-for-solid-returns dept.


jcatcw writes "Intel's first NAND flash memory product, the Z-U130 Value Solid-State Drive, is a challenge to other hardware vendors. Intel claims read rates of 28 MB/sec, write speeds of 20 MB/sec, and capacities of 1GB to 8GB, which are much smaller than products from SanDisk. 'But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'"
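For a rough sense of what those quoted figures mean in practice, here is a quick back-of-the-envelope sketch. It assumes the vendor's peak rates are sustained, which real workloads rarely achieve:

```python
# Back-of-the-envelope check of the quoted Z-U130 specs (vendor peak rates;
# real workloads rarely sustain these).
capacity_mb = 8 * 1000      # largest quoted capacity, decimal GB -> MB
read_mb_s = 28              # claimed read rate
write_mb_s = 20             # claimed write rate

print(f"full read:  {capacity_mb / read_mb_s / 60:.1f} min")    # ~4.8 min
print(f"full write: {capacity_mb / write_mb_s / 60:.1f} min")   # ~6.7 min
print(f"claimed MTBF ratio: {5_000_000 / 2_000_000:.1f}x")      # Intel vs SanDisk
```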


MTBF (5, Interesting)

Eternauta3k (680157) | more than 7 years ago | (#18322697)

'But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'
Is this hours of use or "real time" hours? I don't know about other people, but my pen drives spend most of their time disconnected.

Re:MTBF (1)

stratjakt (596332) | more than 7 years ago | (#18322759)

Think of the caching flash in a hybrid drive.

And why wouldn't you want your pen drive to last 2 1/2 times longer?

Would it be that you're an AMD "fan" and are rooting against your home team's rival?

Re:MTBF (1)

hackwrench (573697) | more than 7 years ago | (#18323395)

Where did you get the idea that he didn't want his pen drive to last 2 1/2 times longer? We get lied to so much that it's reasonable to be skeptical. Why are you trying to attribute ulterior motives to his skepticism?

Re:MTBF (0)

Reverend528 (585549) | more than 7 years ago | (#18322783)

Did they really test these for 5 million hours or are they just pulling the number out of their ass?

Re:MTBF (1)

textstring (924171) | more than 7 years ago | (#18322839)

Any statisticians on slashdot?

Doubtful. (0)

Kadin2048 (468275) | more than 7 years ago | (#18323021)

Did they really test these for 5 million hours or are they just pulling the number out of their ass?

Well, given that 5 million hours is equal to 570.39 years, I'm going to guess that no, they didn't actually test them for that long.

MEAN time between failures, what does that MEAN (3, Informative)

tepples (727027) | more than 7 years ago | (#18323181)

Did they really test these for 5 million hours or are they just pulling the number out of their ass?
It's a mean time between failures. An MTBF figure of 5 million hours means they tested 500,000 of them for 300 hours, and 30 of them failed. A rate of 150 million unit hours per 30 failures equals 5 million unit hours per failure.
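To make the arithmetic above concrete, here is a minimal sketch using the commenter's illustrative test sizes (these are hypothetical figures, not Intel's actual methodology):

```python
# MTBF from aggregate unit-hours, using the commenter's illustrative figures.
units = 500_000      # devices under test (hypothetical)
hours = 300          # hours each unit was tested (hypothetical)
failures = 30        # failures observed during the test

unit_hours = units * hours       # 150,000,000 unit-hours of operation
mtbf = unit_hours / failures     # 5,000,000 hours per failure
print(f"MTBF = {mtbf:,.0f} hours")
```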

Re:MEAN time between failures, what does that MEAN (2, Insightful)

hackwrench (573697) | more than 7 years ago | (#18323439)

That makes about as much sense as declaring that they tested 5 million of them for 1 hour and only one of them failed.

Re:MEAN time between failures, what does that MEAN (1)

tepples (727027) | more than 7 years ago | (#18323549)

That makes about as much sense as declaring that they tested 5 million of them for 1 hour and only one of them failed.
Which is, in fact, equally valid. The MTBF for such a test session would be the same: 5 million unit hours.

Re:MEAN time between failures, what does that MEAN (1)

mr_mischief (456295) | more than 7 years ago | (#18325219)

IANA product tester.

It would be mathematically equal, but I'm not sure it'd be equally _valid_. Given the initial defects and the possibility of misdesign causing heat-related losses or such, some stretch of time is really necessary. Testing 5 million for one hour proves little more than that the expected life is longer than one hour. Testing 200,000 for 25 hours would likely, despite the smaller but still sizable sample size, mean much more. Testing 20,000 at 250 hours would likely mean more still.

5,000 units at 1000 hours (41 days) or 10,000 units at 500 hours seems much more likely. After all, why make hundreds of thousands of something you're not sure are going to work at all?

Re:MEAN time between failures, what does that MEAN (2, Insightful)

dgatwood (11270) | more than 7 years ago | (#18325757)

Or, depending on how you look at it, they are both equally invalid if, in fact, the products have a thermal failure in which a trace on the board melts with a period of 2 hours +/- 1 hour and you've just started hitting the failures when testing concludes. The shorter the testing time, the more thoroughly meaningless the results, because in the real world, most products do not fail randomly; they fail because of a flaw. And in cases where you have a flaw, failures tend to cluster at a particular age or level of use. For example, I find that the MTBF for cars and hard drives tends to be the duration of the warranty period plus 1-4 weeks. :-)

MTBF is approximately useless unless product failures are distributed with a Gaussian distribution around the mean. You could have a long tail with a few of them lasting a decade and most of them dying after a week and still have an MTBF figure measured in years, depending on how the testing was done, and specifically on whether they reached the magic cluster death point during the testing period or not. The odds of accidentally hitting such a degenerate case on a single drive are small, but they add up quickly when you're talking about an entire industry's worth of drive models. Were that not the case, a whole lot of really awful hard drive models would never have made it out of testing, IMHO.

I wish manufacturers would be more transparent about their testing methodologies. My gut feeling, though, is that many of them have poor practices and don't want the world to know. This is one of the rare cases where the "if you have nothing to hide, you shouldn't keep this information private" argument actually holds some weight, IMHO---this and crypto research. :-)
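A toy simulation makes the point above vivid; the lifetime distribution below is invented purely for illustration. A population whose units reliably die around 1,000 hours still yields an astronomical MTBF if the test window closes before the die-off starts:

```python
import random

random.seed(0)

# Hypothetical population: lifetimes cluster tightly around 1,000 hours
# (a wear-out flaw), nothing like the constant failure rate MTBF assumes.
lifetimes = [random.gauss(1000, 50) for _ in range(100_000)]

test_hours = 300   # the test ends long before the wear-out cluster
failures = sum(1 for t in lifetimes if t <= test_hours)
unit_hours = sum(min(t, test_hours) for t in lifetimes)

print("failures seen in test window:", failures)                    # 0
print("apparent MTBF:", unit_hours / max(failures, 1), "hours")     # 30,000,000
print("median real lifetime:", sorted(lifetimes)[50_000], "hours")  # ~1,000
```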

Re:MTBF (1)

Skhaatra (904195) | more than 7 years ago | (#18325333)

When you heat electronic devices, they have been proven to fail at a higher rate, and the relationship between the increase in temperature and the increase in failure rate is known. Therefore you can heat up the equipment when you test it, and that will simulate it being used for a longer period of time. So, for example, you can heat up the flash disks by 50 degrees, test 100 of them over 2 weeks, and then extrapolate from that what the failure rate would be at room temperature. Hence the ability to state values that are very high.

Another way of coming up with an MTBF is based on the MTBFs of all the component parts and how they are connected (with what, in series, in parallel, etc.), without actually testing the entire flash drive. Usually both are used, and any errors in the calculated MTBF are corrected with the actuals from testing, which are then corrected with the actuals from the field via returned parts. Cheers, Ed.
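The temperature-acceleration trick described above is usually formalized with the Arrhenius model. A sketch follows; the 0.7 eV activation energy is a commonly assumed textbook value, not anything Intel or SanDisk has published, and (as the reply below argues) the model is only appropriate for some failure mechanisms:

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin

def arrhenius_acceleration(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor: how much faster thermally activated
    aging runs at the stress temperature than at the use temperature.
    ea_ev is an assumed, mechanism-dependent activation energy."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use_k - 1 / t_stress_k))

af = arrhenius_acceleration(25, 75)                   # test 50 C above room temp
print(f"acceleration factor: ~{af:.0f}x")             # ~50x
print(f"2 weeks of stress ~ {14 * af / 365.25:.1f} years of simulated use")
```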

Re:MTBF (1)

dgatwood (11270) | more than 7 years ago | (#18325963)

I am not a product tester, so I can only go with what I've read on the subject, but what you describe just doesn't sound valid to me in general electronics testing.

First, according to Google's published drive-failure results, thermal considerations had no statistically significant impact on failure rate. Yes, thermal failures can shorten life expectancy (particularly of hard drives), but in a real-world environment, there are far more things besides heat that can cause drive failures, including metal fatigue, bearing fluid leakage, electronics failures (which are not generally thermal in nature), lightning or surges, etc. To raise the temperature and shorten your MTBF testing would be disingenuous to the point that it borders on fraud.

Also, for solid state hardware with no moving parts, thermal failure is somewhere down around number 100 in the things that cause device failure. My best guess is that the number one cause of flash drive failure is the physical breakage of the traces leading from the USB connector, the number two cause of flash drive failure is other physical stress damage causing board breakage, and the number three cause is idiots putting it in the washing machine, dropping it in the toilet, etc.

That's not saying that thermal aging testing isn't useful for some products (electrolytic and paper/oil capacitors, plastics, etc.), but you can't realistically expect me to accept reliability statistics based on that alone, especially for silicon and other metal products, where I've read that assumptions of temperature-to-failure-rate correlation are not at all valid.

Manufacturers of equipment should do thermal testing at temperature extremes to verify that the hardware MTBF does not dramatically change while the temperature is within acceptable limits. That is not the same thing as forging your MTBF numbers by raising the temperature and saying "at 120F, that's equivalent to running it twice as long" or some other such nonsense. Indeed, the very fact that you suggested such a testing procedure is precisely why manufacturers should be required to publish their testing methodology used to determine MTBF so that it can be scrutinized thoroughly and corrected where necessary....

Re:MTBF (4, Funny)

smallfries (601545) | more than 7 years ago | (#18325553)

Yes, of course they tested them for 5 million hours; after all, it's only 570 years. Don't you know your ancient history? The legend of Intelia and their flashious memerious from 1437 AD?

Re:MTBF (0)

eluusive (642298) | more than 7 years ago | (#18322803)

5,000,000 hours = 570.397764 years. I don't know how Intel came up with those numbers, but I'd be happy if I lived to see my SanDisk flash still working at only 2,000,000 hours.

Re:MTBF (3, Insightful)

Target Drone (546651) | more than 7 years ago | (#18323133)

5,000,000 hours = 570.397764 years. I don't know how Intel came up with those numbers

From the wikipedia article [wikipedia.org]

Many manufacturers seem to exaggerate the numbers (hard drive makers, for instance) to accomplish one of two goals: sell more product or sell at a higher price. A common way this is done is to define the MTBF as counting only those failures that occur before the expected "wear-out" time of the device. Continuing with the example of hard drives, these devices have a definite wear-out mechanism as their spindle bearings wear down, perhaps limiting the life of the drive to five or ten years (say fifty to a hundred thousand hours). But the stated MTBF is often many hundreds of thousands of hours, and considers only those other failures that occur before the expected wear-out of the spindle bearings.

Re:MTBF (1)

omeomi (675045) | more than 7 years ago | (#18322897)

This FAQ seems to suggest that MTBF would imply actual hours of active use:

http://www.faqs.org/faqs/arch-storage/part2/section-151.html [faqs.org]

There is significant evidence that, in the mechanical area "thing-time" is much more related to activity rate than it is to clock time.

Why? what does it matter (0)

WindBourne (631190) | more than 7 years ago | (#18322913)

2 million hours vs. 5 million hours. There are roughly 8,800 hours in a year, so 2 million hours is already more than 200 years. If you are still using the same computer in 200 years, I will be either impressed or scared.

Re:Why? what does it matter (1)

26199 (577806) | more than 7 years ago | (#18322995)

It matters a lot if you're using 200 of them at your company...

Re:Why? what does it matter (2, Informative)

Jarjarthejedi (996957) | more than 7 years ago | (#18323057)

MTBF matters because failure is random. They're not saying that every drive will last that long; they're saying that the average drive will. Therefore the chance of any given drive failing within a reasonable amount of time drops as the mean time rises. So with a 5,000,000-hour MTBF, the chance of any one drive failing in your lifetime is incredibly minuscule.
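Whether "minuscule" is the right word depends on how many powered-on hours you rack up. Under the constant-failure-rate (exponential) model that MTBF math assumes, a quick sketch:

```python
import math

MTBF_HOURS = 5_000_000   # Intel's claimed figure

def p_fail(hours_of_use, mtbf=MTBF_HOURS):
    """Chance of at least one failure, constant-failure-rate model."""
    return 1 - math.exp(-hours_of_use / mtbf)

always_on = 50 * 8766        # powered 24/7 for 50 years
one_hour_daily = 50 * 365.25 # plugged in an hour a day for 50 years

print(f"always on, 50 years:  {p_fail(always_on):.1%}")       # ~8.4%
print(f"1 h/day for 50 years: {p_fail(one_hour_daily):.2%}")  # ~0.36%
```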

Re:Why? what does it matter (1)

twiddlingbits (707452) | more than 7 years ago | (#18323495)

A 2-million-hour MTBF means the expected time to failure is a lot longer than my lifetime too. Overkill isn't always better.

Re:Why? what does it matter (1)

LordSnooty (853791) | more than 7 years ago | (#18325203)

No, more hours can be good - just think, you can pass down your Family Photo Thumbdrive to your kids, who might be able to pass it on to their grandkids, if USB is still available...

Re:Why? what does it matter (5, Funny)

Reason58 (775044) | more than 7 years ago | (#18323499)

MTBF matters because failure is random. They're not saying that every drive will last that long; they're saying that the average drive will. Therefore the chance of any given drive failing within a reasonable amount of time drops as the mean time rises. So with a 5,000,000-hour MTBF, the chance of any one drive failing in your lifetime is incredibly minuscule.
Twenty years from now, when hard drive capacity is measured in yottabytes, will you really be carrying around a 512MB thumbdrive you bought for $20 back before the Great War of 2010?

Re:Why? what does it matter (2, Insightful)

LoudMusic (199347) | more than 7 years ago | (#18323733)

Twenty years from now, when hard drive capacity is measured in yottabytes, will you really be carrying around a 512MB thumbdrive you bought for $20 back before the Great War of 2010?
How do you know it's going to happen in 2010? Are you SURE it's going to happen in 2010? That only gives me 3 years to prepare the shelter ...

Re:Why? what does it matter (0, Flamebait)

Surt (22457) | more than 7 years ago | (#18325937)

Barring Bush declaring the Constitution revoked, you probably ought to bet on the great war starting before his term in office expires. That gives you even less time to prepare the shelter.

The odds of the next president being an outspoken war proponent are very low.

Re:Why? what does it matter (1)

jrumney (197329) | more than 7 years ago | (#18323517)

MTBF matters because it's random. They're not saying that every drive will last that long, they're saying that the average drive will.

False advertising is illegal in many countries. This 5-million-hour figure (and SanDisk's 2 million) seems to be based on much shorter tests of large numbers of devices, with the results extrapolated on the assumption that failures are evenly distributed over time. They MUST know that this assumption is wrong. As taught in basic engineering courses, failure rates are basically U-shaped: most failures on the steep left-hand side are detected during manufacture (and hence not counted in the MTBF figures), while those on the right-hand side (with the manufacturing process fine-tuned to minimise production costs and maximise unit sales) optimally start to increase shortly after the warranty expires.
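The U-shaped ("bathtub") failure-rate curve mentioned above is often modeled as a sum of Weibull hazards, one for infant mortality and one for wear-out. A toy sketch, with shape and scale parameters invented purely for illustration:

```python
def weibull_hazard(t, shape, scale):
    """Instantaneous failure rate h(t) of a Weibull distribution."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    infant = weibull_hazard(t, shape=0.5, scale=20_000)   # early-life defects
    wearout = weibull_hazard(t, shape=4.0, scale=60_000)  # aging mechanisms
    return infant + wearout

# High early, low in the middle, rising again at end-of-life:
for t in (10, 1_000, 20_000, 50_000, 80_000):
    print(f"t = {t:>6} h: hazard = {bathtub_hazard(t):.2e} / hour")
```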

Re:Why? what does it matter (1)

dgatwood (11270) | more than 7 years ago | (#18326083)

So with a 5000000 MTBF the chance of any one drive failing in your life time is incredibly minuscule.

I have a box full of dead hard drives that would disagree with you, and I didn't typically use lots of drives at once until fairly recently, so most of those failures were consecutive single drive failures....

The numbers are utterly meaningless for individual consumers. They are only really useful at a corporate IT level, with dozens or hundreds of drives, for figuring out how many spares you should keep on hand. Beyond that, they're just B.S. marketing drivel to try to make you believe that their drives are better, and thus keep buying them even after your last eight drives crashed.... And honestly, I'd be surprised if the numbers were even in the ballpark for large numbers of drives, judging from the failures I've seen over the years. My experience has been that particular models of drives either work nearly forever or die repeatedly, and there's not much in between.... :-)

Re:Why? what does it matter (1)

omarques (685690) | more than 7 years ago | (#18323371)

(...)in 200 years, I will be either impressed or scared.
Really? I will be dead.

Warning (1)

EmbeddedJanitor (597831) | more than 7 years ago | (#18323011)

The MTBF only applies to failures at the NAND level, not the software level.

In most cases the part that fails is the software, not the hardware. For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.

Better than FAT. (2, Interesting)

Kadin2048 (468275) | more than 7 years ago | (#18323103)

To get reliability you need to use a flash file system that is designed to cope with NAND.

Any suggestions of possible candidate filesystems?

Right now, most people I know use flash drives to move data from one computer to another, in many cases across operating systems or even architectures, so FAT is used less for technical reasons than because it's probably the most widely understood filesystem: you can read and write it on Windows, Macintosh, Linux, BSD, and most commercial UNIXes.

However, a disk that was going to be installed in a single machine could be more flexible; it would be somewhat more acceptable to use a specialized filesystem there (as long as the filesystem wasn't so specific as to make recovery impossible), particularly if you wanted to maximize reliability.

Re:Better than FAT. (1)

GreyWolf3000 (468618) | more than 7 years ago | (#18323497)

YAFFS2

Re:Better than FAT. (1)

Intron (870560) | more than 7 years ago | (#18323999)

I was going to suggest ReiserFS, but I heard it had some mortality problems.

Wear leveling in hardware (1)

tepples (727027) | more than 7 years ago | (#18323237)

For example, FAT is a terrible way to store data you love. To get reliability you need to use a flash file system that is designed to cope with NAND.
Or you could create a FAT partition inside a file, stick that file on a flash file system, and mount the FAT partition on loopback. The microcontrollers built into common CF and SD memory cards do exactly this, and this is why you only get 256 million bytes out of your 256 MiB flash card: the extra 4.8% is used for wear leveling, especially of sectors containing the FAT and directories.
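The overhead figure is easy to sanity-check; nothing below is vendor-specific:

```python
MIB = 2 ** 20                      # one binary mebibyte

visible_bytes = 256 * 10 ** 6      # "256 million bytes" the host sees
physical_bytes = 256 * MIB         # 256 MiB of actual NAND on the card

reserve = physical_bytes - visible_bytes
print(f"reserved: {reserve:,} bytes = {reserve / visible_bytes:.1%} of visible")
# -> reserved: 12,435,456 bytes, about 4.9% of the visible capacity
```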

Re:Wear leveling in hardware (3, Interesting)

EmbeddedJanitor (597831) | more than 7 years ago | (#18323415)

The cards with internal controllers do something like you say, and you can read the SD or SmartMedia specs for details. They manage a "free pool" primarily as a way to address bad blocks, but this also provides a degree of wear levelling.

Putting a FAT partition onto such a device, or into a file via loop mounting, only gives you wear levelling. It does not buy you integrity. If you eject a FAT file system before unmounting it, you are likely to damage the file system (potentially killing all the files in the partition). This might be correctable via fsck.

Proper flash file systems are designed to be safe from bad unmounts. These tend to be log structured (eg. YAFFS and JFFS2). Sure, you might lose the data that was in flight, but you should not lose other files. That's why most embedded systems don't use FAT for critical files and only use it where FAT-ness is important (eg. data transfer to a PC).

Re:MTBF (0)

Anonymous Coward | more than 7 years ago | (#18325909)

Mutant Teenage Beastly Fido?

Info. (2, Informative)

Anonymous Coward | more than 7 years ago | (#18322749)

Wear-levelling algorithms. Is there a resource for finding out which algorithms are used by various vendors' flash devices? And links to real algorithms? Hint: not some flimsy pamphlet of a "white paper" by SanDisk.

I want to see how valid the claims are that you can keep writing data on a flash disk for as long as you'll ever need it. Depending on the particular wear-levelling algorithm and the write pattern, this might not be true at all.

Re:Info. (3, Informative)

EmbeddedJanitor (597831) | more than 7 years ago | (#18323095)

These claims will be made at the flash level (i.e. ignoring what the block managers and file systems do).

Different file systems and block managers do different things to cope with wear levelling etc. For some file systems (eg. FAT) wear levelling is very important. For some other file systems - particularly those designed to work with NAND flash - wear levelling is not important.

hmm (2)

mastershake_phd (1050150) | more than 7 years ago | (#18322763)

read rates of 28 MB/sec

Shouldn't a solid state device be able to be read faster than a spinning disc?

Spinning states (2, Informative)

HomelessInLaJolla (1026842) | more than 7 years ago | (#18322859)

These days the platters spin so fast and the data density is so high that the math just might work out the same for a solid state device and a spinning disc; i.e., the spinning disc may, mathematically, approximate the solid state device.

At first thought I agree, though. Maybe there's something inherent in the nature of the conducting materials which creates an asymptote, for conventional technologies, closing in around 30 MB/sec.

Re:Spinning states (1, Funny)

Anonymous Coward | more than 7 years ago | (#18324113)

> Maybe there's something inherent in the nature of the conducting materials which creates an asymptote, for conventional technologies, closing in around 30 MB/sec.

No. That's crazy hobo talk.

Re:hmm (1, Informative)

Anonymous Coward | more than 7 years ago | (#18323001)

Shouldn't a solid state device be able to be read faster than a spinning disc?

Yes and no.

With random access, the performance is going to be superb - random reads are going to be far faster than on any mechanical drive, where waiting for the platters and heads to move is a real problem.

With sustained transfers, speeds are going to depend on the interface - which in this case is USB 2.0 - which has a maximum practical transfer rate of... about 30MB/s.

What's needed are large flash drives with SATA 3.0 Gbit/s interfaces.
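The ~30MB/s ceiling follows from the bus arithmetic; actual throughput depends on the host controller, so treat these as rough numbers:

```python
# USB 2.0 high-speed budget for bulk transfers (the class flash drives use).
signaling_mbit_s = 480                 # raw signaling rate
raw_mb_s = signaling_mbit_s / 8        # 60 MB/s if the wire were perfect

# Bulk endpoints max out at 13 x 512-byte packets per 125 us microframe.
bulk_ceiling_mb_s = 13 * 512 / 125e-6 / 1e6

print(f"raw wire rate:         {raw_mb_s:.0f} MB/s")
print(f"bulk protocol ceiling: ~{bulk_ceiling_mb_s:.0f} MB/s")   # ~53 MB/s
# Real hosts and devices typically land around 30-40 MB/s after overhead.
```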

Re:hmm (2, Informative)

SatanicPuppy (611928) | more than 7 years ago | (#18323027)

Not necessarily...Three platters spinning at 7200rpm is a lot of data.

The place where you make up time with solid state is in seek time...There is no hardware to have to move, so finding non-contiguous data is quicker.

Hard drive heads aren't used in parallel (1)

tepples (727027) | more than 7 years ago | (#18323377)

Not necessarily...Three platters spinning at 7200rpm is a lot of data.
Due to limitations in the accuracy at which a servo can position a hard drive's read and write heads, a hard drive reads and writes only one platter at a time. But you're still right that 7200 RPM at modern data densities is still a buttload of data flying under a head at once.

Re:Hard drive heads aren't used in parallel (1)

dgatwood (11270) | more than 7 years ago | (#18326121)

That's true, but a seek to read the same track on the next platter should be very quick, as IIRC, a lot of drive mechanisms do short seeks in a way that significantly reduces the settle time needed compared with long seeks.

Re:hmm (1)

Kenja (541830) | more than 7 years ago | (#18323063)

Not really, the advantage of solid state vs. magnetic media is in the seek time, not the transfer rate.

Re:hmm (1)

mastershake_phd (1050150) | more than 7 years ago | (#18323263)

Solid state RAM can do GBs per second.

Yeah, if you RAID them (1)

tepples (727027) | more than 7 years ago | (#18323333)

Shouldn't a solid state device be able to be read faster than a spinning disc?
Yes. You could fit a RAID of twenty miniSD cards into an enclosure smaller than a laptop hard drive. Panasonic P2 memory cards [wikipedia.org] work this way. However, Intel sells flash chips and must quote the specifications for individual chips.

Re:hmm (1)

Joe The Dragon (967727) | more than 7 years ago | (#18323593)

The USB bus slows it down; FireWire 400 is faster.

Re:hmm (1)

dbIII (701233) | more than 7 years ago | (#18324929)

Unfortunately not - which is why MS virtual memory on flash should be renamed StupidFetch. Seek times are better and fragmentation is not an issue, so it may beat a filesystem on a really full disk that has got into a mess over time - but otherwise virtual memory on disk will be dramatically faster.

Reason to switch #341 (1)

richdun (672214) | more than 7 years ago | (#18322773)

We know Apple commands a great deal of pricing advantage with their current supplier(s) (Samsung, if memory serves). But, could this be another reason to switch, by picking up Intel CPUs and Intel flash memory chips? Cringely could be getting closer to actually being right - if Intel buys Apple, suddenly iPod, iPhone, Mac, etc. production could go in-house for a huge chunk of the parts.

Just had to throw an Apple reference in there. It's /. law or something.

Apple would lose all its value over time (2, Insightful)

rolfwind (528248) | more than 7 years ago | (#18323151)

Right now, Apple has 90% of its value due to the vision of Steve Jobs and the products he helps create. This is not to say that there aren't many people involved in Apple's success, nor that he even thinks up most of the products, like the iPod - but he does a great job of realizing those products and positioning them in the marketplace.

Unless Intel can keep Jobs and give him free rein, Apple would soon go rotten under the mediocre vision of someone who just doesn't get the Apple culture, looking at the spreadsheets when designing products and releasing "Me Too!" items that look and act like everyone else's. Just look at the stagnation of Apple throughout the late '80s and '90s. Intel certainly isn't that company.

And I think Jobs is too much of a control freak to voluntarily hand himself over to some corporate masters just for a few dollars better margin on a few components.

Intel will never buy Apple (1)

sjbe (173966) | more than 7 years ago | (#18323407)

if Intel buys Apple


It's fun to ponder and an interesting combination, but it will never happen unless the management of both Apple and Intel suffer severe brain aneurysms. Why? Culture and the difficulties of vertical integration. If you want to see the dangers of vertical integration, look no further than Sun and SGI. If you are really big like IBM, it's possible to be a soup-to-nuts vendor, but even then it is rare. IBM, after all, just got out of the PC business, which is Apple's core market. It's just really hard for management to competently deliver every aspect of the product. It's not impossible but it is really, really, really hard.

Regarding culture, Intel has a notoriously combative culture. Intel's products are generally high quality but they aren't consumer products. Intel doesn't really have consumer DNA. Their products are for vendors and techies. Kind of like NutraSweet, they've mastered the "branded ingredient" strategy (i.e. "Intel Inside" and Centrino) but they don't really sell to consumers directly. You don't buy an Intel PC, you buy a Dell or HP with "Intel Inside". Apple, conversely, is one of the best at designing elegant consumer products but doesn't really work deeply with other vendors, since most of its sales are to consumers. If Apple had to work with other computer vendors in a big way, in all likelihood most of the magic of their products would be lost. Both companies have engineers, salespeople, marketing, and company structures to support these VERY different strategies. It would be a herculean task to make the two companies work well together.

Re:Reason to switch #341 (1)

natebizu (960267) | more than 7 years ago | (#18323653)

This just in: "Intel buys Apple"
in another story, "Microsoft buys AMD"

Re:Reason to switch #341 (0)

Anonymous Coward | more than 7 years ago | (#18323829)

Except that Intel don't make chips that are suitable for the iPod and iPhone anymore.

Ah, good, more competition (1)

rolfwind (528248) | more than 7 years ago | (#18322785)

Maybe in the next few generations we'll get the best of both worlds: much higher capacities and reliability.

I need to check out how Intel is actually backing up its reliability claim. If they just replace the drive when it stops working, that may be a cheap proposition for them (when even a currently high-end drive fails a year or two later, its capacity will be small relative to then-current drives, and they can replace it with a cheap one). I'd hate for this to become a war over who can fiddle with the numbers best while the overall quality stays the same in reality.

For how long? (2, Interesting)

EmbeddedJanitor (597831) | more than 7 years ago | (#18323589)

Intel is a weird company when it comes to the way they do business, and I am surprised they are stepping into the NAND flash space - though the writing was on the wall, since they are members of ONFI http://www.onfi.org/ [onfi.org]

Intel bought the StrongARM off Digital, then sold it, presumably to focus on the "core business" of x86 etc. They've made similar moves with their 8051 and USB parts. It is hard to see what would attract them to NAND flash, which has very low margins. NAND flash now costs less than 1 cent per MByte, about a fifth or so of what it cost a year back, and there seems to be no slowing.

Intel seems to work well with high-margin devices (Pentium etc.) and not so well with low-margin parts (USB chipsets, PXAxxx etc.). It is hard to see Intel staying in the NAND business for very long.

Re:Ah, good, more competition (0)

Anonymous Coward | more than 7 years ago | (#18325669)

Intel is calculating these reliability numbers using the same metrics as everyone else in the flash world. The increased performance and reliability numbers come from Intel using single-level cell (SLC) flash in the SSD, which also explains the lower densities. These are generally the same chips as in the upcoming "Robson" NAND caching scheme on the Santa Rosa chipset. They are increasing the number of dies within the TSOPs (stacking) for the SSD drives.

Intel is selling the MLC flash to Apple (for nanos) and others, where the cycle count and performance requirements are lower and the premium is on density.

verification (1)

omnilynx (961400) | more than 7 years ago | (#18322823)

I believe I will wait for third-party verification of those numbers. Specifications from the producers tend to have somewhat... generous fine print.

Lies, damn lies, and MTBF claims (0)

Anonymous Coward | more than 7 years ago | (#18322905)

But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.

Looks to me like Intel simply has less of an aversion to lying. Remember their old Pentium ad which claimed surfing the 'net would be sooooo much faster with their new Pentium, 'cause it's not like it's actually limited by the speed of your network connection?

Re:Lies, damn lies, and MTBF claims (1)

shaitand (626655) | more than 7 years ago | (#18323203)

No, but I have seen web pages rendered on a Pentium 1 versus a Pentium III with the same amount of RAM and on the same network connection. My guesstimate is that a P2 at 400 MHz with 256 MB of RAM and background processes cleaned up is about where the machine stops mattering and the network connection becomes the only substantial bottleneck.

Downloading the content is not the only aspect of browsing the web; the machine must parse and render that content as well.

Incremental layout and web accelerator (2, Informative)

tepples (727027) | more than 7 years ago | (#18323489)

Remember their old Pentium add which claimed surfing the 'net would be sooooo much faster with their new Pentium, 'cause it's not like it's actually limited by the speed of you network connection?
It wasn't entirely false advertising. A web browser on a faster computer can run more iterations of the incremental layout code, so that the data looks like it's coming in faster. A faster computer can run more complex text and mark-up compression in human-acceptable time, allowing for "web accelerator" software that became especially popular during the wane of dial-up.

WTF? (3, Insightful)

xantho (14741) | more than 7 years ago | (#18322909)

2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?

Re:WTF? (4, Insightful)

Kenja (541830) | more than 7 years ago | (#18323029)

"2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?"

Mean time between failures is not a hard prediction of when things will break. http://en.wikipedia.org/wiki/MTBF [wikipedia.org]

Re:WTF? (1)

oddaddresstrap (702574) | more than 7 years ago | (#18324565)

Mean time between failures is not a hard prediction of when things will break.

True, but since it's supposed to be the average time between failures, it had better be closer to 228 than, say, 5 most of the time, or the use of the statistic as a selling point is utterly bogus (some would say fraudulent). It would help to know what the (guesstimated) standard deviation is. The implication of an MTBF of 2x10^6 hours is that it will easily outlast you.

Re:WTF? (1)

SeaFox (739806) | more than 7 years ago | (#18324821)

Mean time between failures is not a hard prediction of when things will break.

True, but even if the drive lasts half as long as the manufacturer's MTBF claim, your data will still outlive you.
... wow, why do I feel the urge to say that with a Russian accent.

Re:WTF? (1)

biocute (936687) | more than 7 years ago | (#18323235)

Well, for those who care about the difference between 250 FPS and 251 FPS in a graphics card.

Re:WTF? (1)

merreborn (853723) | more than 7 years ago | (#18323445)

2,000,000 hours = 228 years and 4 months or so. Who the hell cares if you make it to 5,000,000?


MTBF doesn't work like that. You can, however, directly translate it to a likelihood of failure over a year; that is, if a 1-million-hour MTBF corresponds to a 1% chance of failure over the course of a year, then a 5-million-hour MTBF corresponds to an even lower likelihood of failure over the course of a year.
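Checking the hypothetical numbers above under the usual constant-failure-rate assumption (the caveat in the reply below, that this assumption is itself dubious, still applies):

```python
import math

HOURS_PER_YEAR = 8766   # 365.25 days, device powered on continuously

for mtbf in (1_000_000, 5_000_000):
    afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf)   # annualized failure probability
    print(f"MTBF {mtbf:>9,} h -> ~{afr:.2%} per year")
# MTBF 1,000,000 h -> ~0.87% per year (close to the comment's 1%)
# MTBF 5,000,000 h -> ~0.18% per year
```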

Re:WTF? (0)

Anonymous Coward | more than 7 years ago | (#18323869)

You can, however, directly translate it to a likelihood of failure over a year;

Unless there's some surprising statistical theorem that I've somehow missed, you can only do that if you assume that failure is equally likely in any year n in which the device has not already failed. Given that this seems unlikely, I'd think that you can not translate it directly to a likelihood of failure over a given year. (It might be a useful statistic anyway, though!)

Re:WTF? (1)

vertinox (846076) | more than 7 years ago | (#18323707)

MTBF is an average, so things don't automatically break after 2 million hours, nor do all of them last that long.

The higher the number, the less likely, statistically, you are to get hit with a drive failure.

Think of it like getting in a car accident on a country road versus in a busy city. You might go your entire life in both places without ever getting in an accident, but in both places there is always the possibility you will wreck on your first day of driving.

However, you fare much better on the country road for your insurance rates, because statistically you will get in fewer accidents than if you lived in the city.

Sounds familiar (1)

stokessd (89903) | more than 7 years ago | (#18322951)

8 GB should be enough for anybody...

Re:Sounds familiar (1)

Joe The Dragon (967727) | more than 7 years ago | (#18323631)

Not Vista users, as M$ wants you to have 15GB.

2 million hours? (2, Insightful)

jgoemat (565882) | more than 7 years ago | (#18322977)

So on average, it will last 570 years instead of 228?

Re:2 million hours? (0)

Anonymous Coward | more than 7 years ago | (#18324979)

Yes. And unless the standard deviation is 0, almost no actual samples will last exactly 570 years. Knowing the MTBF alone tells you almost nothing useful.

o yeah? (-1, Troll)

Anonymous Coward | more than 7 years ago | (#18322999)

my dick has an average mean time between failure of 8 million hours

Failures (1)

syntap (242090) | more than 7 years ago | (#18323017)

But Intel also touts extreme reliability numbers, saying the Z-U130 has an average mean time between failure of 5 million hours compared with SanDisk, which touts an MTBF of 2 million hours.'

Yes, because I should be concerned that my pr0n collection isn't making it all the way to my laptop for traveling purposes.

Wake up! (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18323093)

Conventional warfare --fought by tanks, airplanes, ships, etc. -- runs on oil. The US "won" the first and second world wars due to overwhelmingly superior native oil reserves and related infrastructure.

Permanent bases in Iraq, positioned to guard enormous oil supplies, will permit military planners to conduct regional aggression without worrying about long supply lines for fuel (the undoing of Japan and Germany in WWII -- see "The Prize" by Daniel Yergin). Taking the province of Khuzestan away from Iran, for example, is a local conventional combat objective with immense geopolitical benefits to the aggressor.

The real unanswered questions have to do with the beneficiaries of this program. The US sees little gain and ruinous loss. In many ways the US military forces in Iraq are now operating against the national interest, as misguided mercenaries.

Israeli foreign policy has infiltrated every level of American foreign policy -- look at the affiliations of the PNAC signatories and key "advisers" in the administration (Michael Chertoff -- Israeli citizen in charge of US Homeland Security, for example). It could be that Israel has found a way to use the immense resources of the US to fight a proxy war for Israeli benefit. Israel may envision a day when they will take over effective operation of the bases built by the US, fulfilling their stated objective of becoming the Middle East's sole superpower.

Other beneficiaries of the stealing of Iraqi oil include multinational corporations and perhaps the Hashemites (rather than the Saudis). Iraq's value is being stolen, certainly. Tragically, history will show that America's worth has also transferred to invisible thieves. We supply the cannon fodder and revenue to build their bases in Iraq.

Oil will remain important, but conventional warfare will soon decline. The next generation of planetary stealing will be done with bioweapons, nanoweapons, orbiting ray generators, etc. The bases in Iraq will be abandoned within a generation, but by then much of consequence will have been stolen.

5 million hours MTBF (2, Insightful)

Dachannien (617929) | more than 7 years ago | (#18323315)

That figure doesn't tell me jack. What I want to know is if I order 100 of these things, how many of them will fail just after the warranty expires?

Re:5 million hours MTBF (1)

2nd Post! (213333) | more than 7 years ago | (#18324939)

Half as many as if you had bought from SanDisk?

Re:5 million hours MTBF (1)

Dachannien (617929) | more than 7 years ago | (#18325373)

Who can tell? Maybe half of them fail five minutes after you first plug it in, and the other half fail ten million hours later. Maybe only a very few fail within the first five years, and the failures start picking up after that. Nobody can tell from this figure, which is pure marketroid-speak without any practical application.

WW0t... fp (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#18323397)

Was that supposed to be a pun? (1)

thelenm (213782) | more than 7 years ago | (#18323409)

Sheesh, I read the headline and thought Intel had developed some buggy chip that somehow stomps on flash memory. Nice, well, at least it got my attention.

Useful Size (1)

SilentDissonance (516202) | more than 7 years ago | (#18323597)

I'd like to see a semi-affordable (around $250) solid state storage device in a standard form factor and connection (3.5" SATA), at a decent size (15GiB).

This would be an ideal boot and OS drive for me: / and most of its directories, along with a decent-sized swap (2-3 GiB). Put /home and /tmp on a 'normal' large drive (a standard SATA drive of decent speed, a RAID array, etc.).

I've thought about doing this for a while, in fact... but every time I research it I come to dead ends: no price info, high prices, or odd interface requirements that aren't suitable for a desktop machine.

Re:Useful Size (1)

b1scuit (795301) | more than 7 years ago | (#18324467)

It would be wiser to put the swap partition on the conventional disks. And for the love of god, buy some RAM. (or quit wasting all that disk space) 2-3GB of swap is silly, and if you actually find yourself using that much swap, you really need more RAM.

Re:Useful Size (1)

networkBoy (774728) | more than 7 years ago | (#18324497)

I'm looking for a Compact Flash to 2.5" IDE adapter myself for basically the same use. I figure I could deal with an 8 gig / partition, and would just double-face-tape the CF card to the drive sled.
-nB

Re:Useful Size (1)

toddestan (632714) | more than 7 years ago | (#18326443)

Why not RAID0 two 8GB compact flash cards? You would end up with 16GB of fast flash storage with a convenient interface, and I don't think it would be any less reliable than a single mechanical HDD.

Has anyone actually done the math on this? (1)

Whuffo (1043790) | more than 7 years ago | (#18323647)

Let's see now - 2 million hours works out to about 228 years. Seems like a safe claim to make...

So Intel upping the rating to 5 million hours is meaningless. Somehow I suspect that the people at Intel know this...

Wait a minute.. (2, Informative)

aero2600-5 (797736) | more than 7 years ago | (#18323719)

"mean time between failure of 5 million hours"

Didn't we just recently learn that they're pulling these numbers out of their arse, and that they're essentially useless?

Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you? [usenix.org]

This was covered on Slashdot [slashdot.org] already.

If you're going to read Slashdot, at least fucking read it.

Aero

Re:Wait a minute.. (1)

greg1104 (461138) | more than 7 years ago | (#18323873)

This was covered on Slashdot already. If you're going to read Slashdot, at least fucking read it.

Maybe they were waiting until that story was accepted to Slashdot a second time before reading it.

MTTF != MTBF (1)

MrR0p3r (460183) | more than 7 years ago | (#18324201)

MTTF is not MTBF. In the world of metrics, they're different. While they both measure failures, time to fail and time between failures are different measurements for a reason: they tell us different things about the product we're testing.

Re:Wait a minute.. (0)

Anonymous Coward | more than 7 years ago | (#18324525)

Yeah, if you're going to post a story on Slashdot, be sure to have read and internalized the implications of every Slashdot story ever posted, and never make a passing reference to metrics completely and directly relevant to the story but of questionable predictive value. Or be prepared to be condescended to by aero2600-5. Sheesh.

-TUAC

Seemed Inevitable... (1)

evilviper (135110) | more than 7 years ago | (#18323793)

It seemed pretty inevitable to me, that the Intel/IBM/AMDs of the world would branch out.

The generation-old fabs they abandon for CPU-making are still a generation newer than what most anyone else has available. Repurposing those fabs to produce something like flash chips, chipsets, etc. seems a pretty straightforward and inexpensive way to keep making money on otherwise largely worthless facilities, even after the cost of retooling is taken into account.

Though they obviously haven't done it yet, companies like Intel have the manufacturing capabilities to leapfrog past all current Flash manufacturers, as far as density is concerned (though, personally, I'd say Flash density is fine, if the price can be driven down).

Re:Seemed Inevitable... (1)

daverabbitz (468967) | more than 7 years ago | (#18324349)

Intel has been making flash for years (decades?). And their fabs aren't much, if any, better (in terms of scale) than those of Altera and Xilinx, and probably Kingston, Samsung, Motorola and the rest.

You do know that 65 nm FPGAs were on the market before 65 nm processors. The reason is obvious: while Intel has to tool and tune a very complicated CPU to get decent yields, all a RAM/flash/FPGA manufacturer has to do is tune a small amount of cookie-cutter design and ramp up production. As RAM/flash/FPGA chips are very homogeneous, the design is simpler and it is a lot easier to implement fusing to increase yields (at the expense of density).

What is new here is that they are selling a consumer flash (S)ATA device.

Re:Seemed Inevitable... (0)

Anonymous Coward | more than 7 years ago | (#18325335)

I work in the research department of one of the large semiconductor manufacturers you refer to. And I can say that you're basically talking out your ass. That's right. You have no idea wtf you are talking about.

High MTBF = Don't you worry about the MTBF. (1)

Skhaatra (904195) | more than 7 years ago | (#18325501)

With such a high figure what they are really saying is that there isn't much to break in there, unless you shove it in a fire or run over it with your car. So don't you worry about it.

Usually the MTBF will follow a bell curve (measured) and so there are bound to be a few failures within the warranty period due to manufacturing defects, but they should be small.

If you want to get paranoid about it you could always buy two of them and keep their contents in sync; then at least your MTTR (Mean Time To Repair) will be lower. Note that even with two of them the MTBF remains the same - either one is still just as likely to fail - but you get time to obtain a replacement with no downtime or loss of data.
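The mirror idea can be quantified with the classic two-way-mirror approximation, which assumes independent failures and a constant failure rate (both idealizations); the repair window below is an assumed number, not a vendor spec:

```python
MTBF = 5_000_000   # hours per unit (Intel's claimed figure)
MTTR = 72          # hours to notice a failure and resync a new unit (assumed)

# Classic mean-time-to-data-loss approximation for a 2-way mirror:
# data is lost only if the survivor fails during the repair window.
mttdl = MTBF ** 2 / (2 * MTTR)

print(f"single unit: ~{MTBF / 8766:,.0f} years between failures")
print(f"mirrored:    ~{mttdl / 8766:.1e} years to data loss (idealized)")
```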

Intel vs AMD (1)

bernywork (57298) | more than 7 years ago | (#18325661)

Just wondering, doesn't AMD make a whole bunch of money on Flash memory?

I know that they spun off the division to Spansion, which was a joint venture with Fujitsu, but if memory serves me correctly they still own a good section (40% or similar) of the company and make a lot of money out of it.

Conspiracy theories'R'us I guess. It could just be that Intel turned around and said "What do you mean AMD is making a heap of cash out of something that isn't as hard to make as CPUs and we aren't?"

Such praise for Intel... (0)

Anonymous Coward | more than 7 years ago | (#18326063)

I'm cautious in reading articles having to do with Intel on slashdot, now that they are sponsored by Intel and have an entire "Opinion Center" for Intel.

What does it "mean" anyway? (1)

obeythefist (719316) | more than 7 years ago | (#18326163)

Rudimentary statistics (IANAS)

The mean just tells us what you get if you take a sample and divide the sum of the values by the sample size. It's one of the three more meaningful "averages" you can get in statistics. In this case I'd be at least as interested in seeing the mode and median.

You can "screw up" a mean by adding one or two samples that are extreme. These disks, say they have a 5 million MTBF as the figure you want, but they all really fail after 5 minutes of use. Problem, right? Wrong! You just get a a few units that are good for 1 or 2 billion years and throw them into the mix. Then your mean value skyrockets into millions! The median or mode averages won't suffer from the same distortion.

Of course, if we are dealing with a reasonable (wishful thinking: normally distributed) sample, then I would like to know the variance and standard deviation for the sample. This will tell us whether all the drives plug away for almost exactly 5 million hours, or whether they are just as likely to last 1 million or 9 million hours, or anywhere in between or even outside that range.

But all that extra information isn't provided to us. We just get the mean. On its own, the mean doesn't mean much at all.
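A tiny concrete version of the distortion described above, with all the numbers invented for illustration:

```python
import statistics

# 99 units die after five minutes; one freak unit lasts 2 billion hours.
lifetimes = [5 / 60] * 99 + [2_000_000_000]

print(f"mean:   {statistics.mean(lifetimes):>13,.2f} hours")  # ~20,000,000
print(f"median: {statistics.median(lifetimes):>13.2f} hours") # 0.08 (5 minutes)
print(f"mode:   {statistics.mode(lifetimes):>13.2f} hours")   # 0.08 (5 minutes)
```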