
Intel's Braidwood Could Crush SSD Market

kdawson posted more than 4 years ago | from the if-only-they-could-make-it-rotate dept.

Data Storage

Lucas123 writes "Intel is planning to launch its native flash memory module, code named Braidwood, in the first or second quarter of 2010. The inexpensive NAND flash will reside directly on a computer's motherboard as cache for all I/O and it will offer performance increases and other benefits similar to that of adding a solid-state disk drive to the system. A new report states that by achieving SSD performance without the high cost, Braidwood will essentially erode the SSD market, which, ironically, includes Intel's two popular SSD models. 'Intel has got a very good [SSD] product. But, they view additional layers of NAND technology in PCs as inevitable. They don't think SSDs are likely to take over 100% of the PC market, but they do think Braidwood could find itself in 100% of PCs,' the report's author said."

271 comments

Not so sure (5, Interesting)

mseeger (40923) | more than 4 years ago | (#29309633)

When given similar performance at a slightly higher price, I would prefer the SSD. I can't take the flash to the next PC as I can with an SSD. Hard disks have a higher life expectancy than mainboards (I usually find some good use for old HDs; I never did for old mainboards). Unless the SSD costs 2-3 times as much as the flash on the mainboard, I believe SSDs will still be used. But maybe this will lead to lower SSD prices.

Re:Not so sure (5, Insightful)

Dogtanian (588974) | more than 4 years ago | (#29309785)

I can't take the flash to the next PC as i can do with the SSD.

Not really a big deal; if it becomes commonplace, most PCs will eventually have it (or something like it) as standard anyway and you won't be bothered about it.

Re:Not so sure (-1, Offtopic)

Dogtanian (588974) | more than 4 years ago | (#29309881)

I see some idiot's been given mod points and is marking everything down as a troll... :-/

Re:Not so sure (5, Insightful)

Z00L00K (682162) | more than 4 years ago | (#29309903)

Whoever modded the parent as a troll must be weird.

That said, I'm more worried about exhausted flash on the motherboard. Have all avenues actually been considered here, or is this a built-in best-before date for new motherboards?

Re:Not so sure (1)

ThePhilips (752041) | more than 4 years ago | (#29310119)

That's what worries me too.

MLC still hasn't improved its lower durability bound: 100K erase cycles. (Flash companies often advertise only the upper bound: 1-2.5M erase cycles. SLC sits at about 1M cycles.) And 100K erases for the flash - especially if one puts a FS's journal there - is really not that much, since the journal has to be updated often while the operation itself might never even reach the disk. E.g. on many systems this sequence will touch only the FS journal: create a temp file, work with it for a split second, delete it. And that happens more often than some might think.

People on servers (and not only servers) have been doing something similar with ext3 for quite some time. Ext3 supports an external journal, and many were using CF cards attached over IDE as the journaling device. It would be interesting to hear about their experience in this context.

Re:Not so sure (1, Troll)

ErikZ (55491) | more than 4 years ago | (#29310491)

That's about 68 erases per sector PER DAY if I want my new SSD to last 4 years.

In 4 years I expect SSD tech to be much cheaper, faster, and able to hold far more data. I am not concerned.

I'd also use these on servers, barring a DB server or some kind of caching server.

Re:Not so sure (2, Insightful)

Hognoxious (631665) | more than 4 years ago | (#29310667)

That's about 68 erases per sector PER DAY if I want my new SSD to last 4 years.

TFA says it'll be used for an I/O cache, so I suspect it'll get hit slightly more often than that.

Re:Not so sure (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29310757)

You're a moron and a math fail.

Re:Not so sure (4, Informative)

vidarh (309115) | more than 4 years ago | (#29310231)

It's *cache*. It's not meant to be moved, and it doesn't prevent you from moving the hard drive. Nor does it prevent you from using an SSD, it just means the performance reasons for using an SSD may get significantly reduced.

Re:Not so sure (2, Interesting)

gmuslera (3436) | more than 4 years ago | (#29310387)

What if you treat it as a "cache" that survives reboots, but back the data up to a more transportable device when you really want persistence? It will probably be pretty fast (maybe faster than normal SSDs, at least regarding the bus connection), and holding e.g. the most requested files, database slaves for fast queries, swap/temp partitions or even the OS could improve typical PC performance a lot.

Re:Not so sure (1)

asdf7890 (1518587) | more than 4 years ago | (#29310755)

What if you take it as "cache", one that survives reboots,

The big thing I see here isn't caches surviving intentional reboots - i.e. stuff cached pre-boot still being available post-boot without spinning-disk reads. For that matter, I'd be wary of such a feature (it would need to be well implemented and very well tested to deal with odd circumstances like disk connections being physically rearranged between shutdown and restart).

The two big advantages here are standard cache/buffer behaviour during active system use, and written data surviving an unexpected power outage so it can be written when power returns, reducing the chance of data corruption in such circumstances (many high-spec RAID controllers use battery-backed DRAM for this sort of thing).

why flash? (1, Interesting)

gbjbaanb (229885) | more than 4 years ago | (#29309637)

I mean, why not put cheaper DDR RAM on the motherboard, with a big capacitor or battery to allow it to flush all writes out to disc when the power stops?

In fact, why bother with a dedicated memory tier as an I/O cache at all when you could just add more RAM to the motherboard and let the OS cache writes? Intel: stop thinking like this and just hand out a free 1GB DRAM stick with every motherboard, job solved.

Re:why flash? (5, Informative)

Diabolus Advocatus (1067604) | more than 4 years ago | (#29309693)

RTFA. It's cheaper than DRAM.

Re:why flash? (1)

postbigbang (761081) | more than 4 years ago | (#29309921)

It might be cheaper, but consider that flash has an upper-end write limit. No one's tested that adequately yet. It could brick your motherboard should the number of writes exceed that limit - flash has one, and DRAM does not.

Re:why flash? (1, Funny)

Anonymous Coward | more than 4 years ago | (#29310017)

You know, I bet you are right on the money. I bet no one at Intel, and this is Intel we are talking about here, has thought of that. Of the probably triple-digit number of engineers they have working on projects related to this, and these are top-level graduated-first-in-their-class engineers, I bet not a single one of them has thought to test something as obvious as this.

Thanks postbigbang, you've saved computing once again!

Re:why flash? (0)

Anonymous Coward | more than 4 years ago | (#29310217)

Graceful fallback tends to be the _first_ thing written for hardware improvements (when it's applicable), because the software end often needs to be written before the hardware is actually available to developers: you can do a lot of work once you have the "and when that doesn't work, do this instead" codepath written.

Re:why flash? (3, Insightful)

DigiShaman (671371) | more than 4 years ago | (#29310311)

Well, hopefully there will be a BIOS option to disable this hardware in case of failure. Better yet, make the modules removable, much like the old COAST (Cache On A Stick) modules of the first-gen Pentium days.

Re:why flash? (1, Insightful)

imgod2u (812837) | more than 4 years ago | (#29309755)

Your OS doesn't always have time to shut down properly. Don't think anyone's fond of the idea of having their last couple of saves go poof because Windows crashed.

Intel's SSD drives already have 32MB of DRAM. But it's not used to buffer data because of reliability issues.

Re:why flash? (1)

pmontra (738736) | more than 4 years ago | (#29309899)

No matter how many layers you add or remove, there will always be a chance of data loss when the OS crashes (Win, Mac, Linux, anything), because there is a finite time before changed data is permanently stored, even on this new SSD memory. Furthermore, that time can be quite long depending on the OS and file-system design.

Anyway, adding one more layer adds one more point of failure, so these new machines could be faster but also, for sure, a little less reliable than what we have now. What happens when the Braidwood SLC NAND cells start failing? Will it fail silently or bring down the system like an HD failure? Can I replace it, or do I just throw it away and replace the whole motherboard?

Re:why flash? (0)

afidel (530433) | more than 4 years ago | (#29309997)

Uh, if you turn off write caching to the drive in Windows, a crash means I/O stops when the OS stops sending bits to the I/O driver, not 30 seconds before or anything like that. Modern SLC is reliable enough that you would have to be pushing terabytes per day for years to wear it out, and if you are doing that then I think you will know how to disable this feature =)

Re:why flash? (1)

agnosticnixie (1481609) | more than 4 years ago | (#29310355)

And even MLC will outlast the average laptop, or even the average Slashdotter's clunker, unless they're using it as a portable heavy-duty server (in which case they deserve what they get, just as much as if they were running a heavy-duty server on regular HDDs instead of more robust stuff).

fsync(fileno(fp)); (1)

tepples (727027) | more than 4 years ago | (#29310015)

there is a finite time before changed data are permanently stored even on this new SSD menory. Furthermore that time can be quite large depending on the OS and file system design.

If you flush and sync [opengroup.org] the file in the thread that writes the file, you can be sure that "[t]he fsync() function does not return until the system has completed [writing data] or until an error is detected." By "that time", do you refer to the time that the program blocks on fsync()?
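For reference, the flush-then-sync idiom under discussion looks roughly like this (a POSIX sketch; the wrapper name and error handling are illustrative, not from TFA):

```c
#include <stdio.h>
#include <unistd.h>   /* fsync() */

/* Minimal sketch of durable writing: fflush() empties stdio's userspace
   buffer into the kernel, then fsync() blocks until the kernel reports
   the data is on stable storage. Returns 0 on success, -1 on error. */
int write_durably(const char *path, const char *buf, size_t len) {
    FILE *fp = fopen(path, "w");
    if (!fp) return -1;

    if (fwrite(buf, 1, len, fp) != len) { fclose(fp); return -1; }

    /* Step 1: userspace buffer -> kernel page cache. */
    if (fflush(fp) != 0) { fclose(fp); return -1; }

    /* Step 2: kernel page cache -> stable storage (or a nonvolatile
       write cache such as the one Braidwood proposes). */
    if (fsync(fileno(fp)) != 0) { fclose(fp); return -1; }

    return fclose(fp);
}
```

Note that fflush() alone is not enough: without the fsync(), the data can still be sitting in the OS page cache when the power dies.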

Re:fsync(fileno(fp)); (2, Insightful)

pmontra (738736) | more than 4 years ago | (#29310519)

I'd answer yes, but one doesn't control the fsync behavior of every application running on one's system, and the OS/file system can take a lot of time (even tens of seconds or more) before deciding to commit changes to the hard disk. Furthermore, an fsync may take seconds to complete, and disaster can strike at any time.

There was quite a commotion about those matters when somebody filed a data loss bug [launchpad.net] against the new Linux ext4 file system in January 2009. It turned out that ext3 commits changes at least every 5 seconds and ext4 does it less often. Some applications that got lucky with Linux crashes on ext3 exposed their poor design when running on ext4. Comments #45 and #54 in the linked page are quite explanatory.

By the way, that was a sloppy application-coding problem (if you want your data safe on the HD, you fsync and wait as long as it takes to write it down), but they eventually issued some patches to the file-system code to mitigate it.

Braidwood is supposed to speed up fsync (1)

tepples (727027) | more than 4 years ago | (#29310727)

one doesn't control the fsync behavior of every application running on his/her system

Users can choose applications that properly sync, or in the case of free software, they can put fflush(fp); fsync(fileno(fp)); in strategic places and recompile. Braidwood, which appears to implement a nonvolatile ring buffer for writes, should make it less painful for application developers to sync when appropriate.

By the way that was a sloppy application coding problem

That, and the fact that fsync() doesn't provide any sort of reasonable performance guarantees. Some of the slowdowns of Firefox 3.x on netbooks are due to an SQLite COMMIT (which calls fsync()) taking several seconds to flush writes [off.net] to a cheap HDD or cheap SSD. That's one thing this Braidwood chip is intended to correct.

Re:why flash? (2, Interesting)

Menchi (677927) | more than 4 years ago | (#29310013)

> Your OS doesn't always have time to shut down properly. Don't think anyone's fond of the idea of having their last couple of saves go poof because Windows crashed.

So, what happens if my PC crashes because of some hardware failure and I have to plug in a different HDD for some reason? Or plug the HDD into a different mainboard? All the things I thought I wrote to the disk will be gone. In fact, the file system might be inconsistent if this thing doesn't honor flush requests. But if it does honor flush requests then nothing is gained, it'll still be the OS that does all the caching.

Well, it'll still be a great read cache; 4-16GB is more than most people have available as RAM cache, so it'll be good for something.

Re:why flash? (4, Informative)

erple2 (965161) | more than 4 years ago | (#29309847)

First of all, DDR RAM is not cheap (at least, not compared to NAND flash). It costs significantly more per gigabyte than even the most expensive of Intel's SSD offerings. While it should provide more theoretical throughput than any SSD, benchmarks at various places (http://techreport.com/articles.x/16255/1) haven't shown that to be significant yet, at least from the end-user perspective (some synthetic benchmarks show that RAM-based disks can be faster than SSDs, but translating that to real-world consumer usage doesn't show any tangible benefits).

DDR RAM also uses a very large amount of power per stick compared to SSDs. I remember seeing the power consumption of one of the DDR2-based "volatile hard drives"; it was higher than spinning drives (at least at idle), and it wasn't particularly faster than the best of Intel's SSDs.

So it sounds like DDR RAM on board is expensive, power hungry, and doesn't provide much tangible benefit to consumers. Tell me again why it's a good idea?

Re:why flash? (3, Insightful)

Profane MuthaFucka (574406) | more than 4 years ago | (#29309909)

Exactly. I already have a disk cache. This solution is redundant. Also, this solution doesn't get me away from the mechanical spinning noisy hot slow thing which fails too often.

Re:why flash? (1)

drsmithy (35869) | more than 4 years ago | (#29310027)

Exactly. I already have a disk cache. This solution is redundant.

How many caches are there between your CPU and RAM ?

Re:why flash? (5, Funny)

Anonymous Coward | more than 4 years ago | (#29310293)

Stop using a typewriter to post on Slashdot.

Signed, everyone.

The writing's on the wall. (5, Insightful)

jcr (53032) | more than 4 years ago | (#29309643)

Sooner or later, no moving parts beats moving parts. The magnetic disk makers have done an amazing job so far, but eventually they're going to lose out to solid-state.

-jcr

Re:The writing's on the wall. (2)

Diabolus Advocatus (1067604) | more than 4 years ago | (#29309715)

Capacity is still an issue though. Although SSDs offer a lower cost per transaction in enterprise storage and provide a real benefit there, those deployments still keep massive amounts of HDDs on the lower storage tier. Outside of work, where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

Re:The writing's on the wall. (5, Insightful)

Anonymous Coward | more than 4 years ago | (#29309739)

Just goes to show how warped a professional's perspective really is. Standard consumer with 4TB of data? Really?

Re:The writing's on the wall. (2, Insightful)

kimvette (919543) | more than 4 years ago | (#29309843)

The RIAA and MPAA would have you believe that each man, woman, and child has downloaded at least that much in illegal movies and music.

Re:The writing's on the wall. (4, Insightful)

camperdave (969942) | more than 4 years ago | (#29309915)

Consumer electronics store shelves are packed with terabyte sized hard drives. 4TB may be a little ahead of the curve, but not by much.

Re:The writing's on the wall. (2, Informative)

dogmatixpsych (786818) | more than 4 years ago | (#29310285)

No, actually having 4TB of data is way ahead of the curve. The "standard consumer" has maybe 100GB worth of (non-OS) data on a drive, even if the drive is 1TB.

Re:The writing's on the wall. (1)

gclef (96311) | more than 4 years ago | (#29310433)

I think people are going to have a lot more than that when recording HDTV with a Tivo-alike device. 1TB works out to about 100-ish hours (yes, I'm rounding heavily) of HD video. Tivo certainly has users who record & keep that much video.

Re:The writing's on the wall. (1)

silanea (1241518) | more than 4 years ago | (#29310093)

A friend of mine studies architecture. He stores several TB of DSLR photos and renderings on his desktop machine. Another friend stores all his audio CDs, DVDs and BluRays, plus lots of TV recordings, on a little server for his HTPC; he recently hit 3TB.

They are not "standard consumers", but they are not hard-core nerds, either. Storage is so cheap to acquire (and so easy to use) that people can afford never to delete anything again. Whether that is sensible is a whole other question. But the result is that more people store more stuff.

Re:The writing's on the wall. (0)

Anonymous Coward | more than 4 years ago | (#29310361)

Over the next decade (perhaps half of that), 4TB WILL be the norm. The reason is that more and more people expect to keep all their emails, their pix, their documents. On top of that, people will be downloading and saving books on their systems. And yes, they will prefer to save it all on their own system; think Gmail over the last few days (people expect that kind of outage from a Windows system, but Google is not MS).

Re:The writing's on the wall. (2, Informative)

linzeal (197905) | more than 4 years ago | (#29310411)

Everyone I know has at least a few TB of data on burnt CDs and DVDs. It would be nice to consolidate your multimedia stuff into one storage device. I'm running 8 terabytes of data on 10 1TB hard drives in an ATA-over-Ethernet [sourceforge.net] setup [freshmeat.net] in RAID 6. So yeah, I'm probably not your average consumer, but being able to access thousands of hours of movies, TV and home video without having to pay Netflix or watch the ads on Hulu is pretty nice.

Re:The writing's on the wall. (1)

Dogtanian (588974) | more than 4 years ago | (#29309833)

Capacity is still an issue though. [..] it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

Go back and read what he said. It's clear that he was talking about the near to middle future, not the current situation:-

Sooner or later, no moving parts beats moving parts. The magnetic disk makers have done an amazing job so far, but eventually they're going to lose out to solid-state.

Flash memory is at present growing in capacity much faster than magnetic drives. (Actually, it's growing at the rate that the latter grew at during the 1990s and early 2000s). Of course, it's still got a long way to go to catch up, and- like hard drives- it's not guaranteed that it'll keep that rate of growth forever. Still, the current shape of solid state's curve has it intersecting that of magnetic platters in the not so distant future.

Re:The writing's on the wall. (1)

beelsebob (529313) | more than 4 years ago | (#29310289)

I don't think SSDs have that far to go to catch up. SSD: 512GB in a 2.5" box; hard disk: 640GB in a 2.5" box. That's only a small difference, and until last week the SSD was in the lead. The only reason SSDs trail in the 3.5" drive stakes is the same reason they trail on price: not enough factories have been built yet. Fix that (it probably won't take long), and SSDs will very rapidly be competing with HDDs capacity-wise.

Re:The writing's on the wall. (0)

Anonymous Coward | more than 4 years ago | (#29310403)

Moving parts wear out - but isn't the same true of the memory cells in Flash memory? As in "up to 1,000,000 write-erase cycles".

I'd think they'd wear out very quickly if someone was crazy enough to use Flash as a cache! But maybe that's a different sort of Flash?

Re:The writing's on the wall. (1)

Kjella (173770) | more than 4 years ago | (#29310571)

Perhaps. But they still have a long way to go on $/GB. Checking my local price guide, it's $0.09/GB for 1.5TB HDDs and $3.75/GB for the cheapest SSD. But yeah: booting off an SSD and keeping an HDD for media, sure.

Tag the article 'idiot' (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#29309645)

Enough said.

Ohh - maybe they could take it to the next step... (4, Interesting)

IcephishCR (7031) | more than 4 years ago | (#29309667)

Now if only they could follow the server-side folks and place an internal USB connector inside. Then MS and others could give us the OS on its own USB drive (read only), we could use the hard drive for updates and programs, and we could enhance security as well...

Re:Ohh - maybe they could take it to the next step (1)

jcr (53032) | more than 4 years ago | (#29309677)

I really don't want to contemplate having my boot volume on a USB device.

-jcr

Re:Ohh - maybe they could take it to the next step (1)

drinkypoo (153816) | more than 4 years ago | (#29310271)

This used to be a huge PITA before GRUB supported UUIDs as groot values. It was an even bigger one before Linux would do it. On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of USB disk emulation mode also.

I have used a SDHC in SD adapter (16GB Sandisk) as root for Linux on both the above systems and aside from some slowness (~5 MB/sec, but the 5200rpm disk in these systems is pretty damned slow anyway) it works great. I used XFS as the root, it plays well with such devices as far as journaling filesystems go. So this is actually a flash card in a multi adapter on the USB bus in both cases, and worked great. I have also installed a 4GB Sandisk flash into the internal USB port on an add-in card in my desktop system and used it for ReadyBoost, but that system now runs Jaunty. Having more bootable devices increases boot time, so I just boot from the disk. It would be easy to mirror /boot to flash but there's little to be gained speedwise by doing so little. If you union mounted a flash volume with files needed at boot over the actual boot volume... eh, now I'm just being silly.

Re:Ohh - maybe they could take it to the next step (1)

drsmithy (35869) | more than 4 years ago | (#29310501)

On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of USB disk emulation mode also.

Basically, it just needs to appear to the OS as a "fixed disk" rather than a "removable disk".

Re:Ohh - maybe they could take it to the next step (5, Interesting)

zrq (794138) | more than 4 years ago | (#29310105)

Why a USB connector? That causes the same problem as putting SSDs behind the SATA interface: the serial interface becomes slower than the things connected to it.

What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

Four sockets with 16 or 32G in each would give you enough space to store the entire OS. I don't know how Windows would handle it, but in a Unix or Linux based system it would be fairly easy to mount the devices as read only partitions and map them into the filesystem. This would be ideal for a server system, mapping the entire OS into the main memory address space and making it read only.

In fact all the BIOS would need to do is make the first 100M visible as a boot partition, and leave the OS to handle the rest.

Re:Ohh - maybe they could take it to the next step (1)

IcephishCR (7031) | more than 4 years ago | (#29310509)

Why would you need 4x16G for the OS? Vista doesn't even take up that much... I was thinking of starting with around 8G for Windows and using a union FS to map Windows updates over the original files.

No user programs here. Interface-wise I wouldn't care that much: USB is everywhere and slow, but drop in SD or CompactFlash and I'm OK. More like: purchase an OS on flash, drop it in, and nothing can touch it. If I need to wipe, no big deal, just rebuild the config files and clean out user data. And no more friggin' programs writing files into c:\windows...

Re:Ohh - maybe they could take it to the next step (1)

IcephishCR (7031) | more than 4 years ago | (#29310535)

I'm not married to USB; CF or SD would work. Something semi-portable would be best, as I would like to see the OS and OS only on the flash memory. Maybe a couple gigs of that space would be writable for config files...

Think boot/OS disk...I'm all for speed, its just that USB is everywhere.

Re:Ohh - maybe they could take it to the next step (1)

Abcd1234 (188840) | more than 4 years ago | (#29310593)

What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

The one issue, here, is one of address space. Unless people do a wholesale migration to 64-bit, it won't be possible to simply map the address space of such a device into memory.

So when I drop my laptop, the NAND saves my HDD? (4, Funny)

Rogerborg (306625) | more than 4 years ago | (#29309675)

What does it do, scream "Nooooooooo!" and throw itself underneath the hard drive in slow motion?

HW buffer for drives (2, Interesting)

Keruo (771880) | more than 4 years ago | (#29309681)

Sounds like a good plan. Throw cheap battery-backed memory, 4-16GB, onboard to act as a transparent buffer between hard drive(s) and system.
Fast I/O is ensured as most operations happen in memory, and data loss isn't an issue as the memory is battery-backed.
RAID cards have done this for ages, but it's becoming a real option for desktops as memory prices keep declining.
16GB might be overkill for most purposes; you could get away with 2 if the system is used only for low-power tasks like surfing and email.

Re:HW buffer for drives (0)

Anonymous Coward | more than 4 years ago | (#29309709)

You can "get away" with zero. We do now.

Re:HW buffer for drives (3, Insightful)

natehoy (1608657) | more than 4 years ago | (#29309747)

I agree, but why would Intel want to use flash memory for this? RAM is faster, handles a LOT more read/write cycles, and could be backed up by a small battery for short power outages (or maybe a battery big enough to run the hard drive long enough to flush the write buffer, as others have said).

This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances the flash memory is going to wear out relatively quickly, and unless it's easily replaceable, everyone's going to need to buy a new motherboard every year. How could forcing people to replace motherboards annually possibly benefit Intel? Oh, wait...

Re:HW buffer for drives (2, Insightful)

Truekaiser (724672) | more than 4 years ago | (#29309945)

It's called planned obsolescence. Given the amount of read/write cycles on the I/O and the fact that all flash memory is limited to a certain number of them, integrating this into the motherboard means the motherboard has an expiration date they can predict and design around. In such a situation the flash will mostly last about a year or two.

Speed has nothing to do with it, because you're /still/ bound by the data flush to the disk drive, which will be much slower. Data security between crashes seems to be only a side benefit.

Re:HW buffer for drives (1)

drsmithy (35869) | more than 4 years ago | (#29310075)

I agree, but why would Intel want to use flash memory for this?

Because, according to TFA, it's 1/4 the price of DRAM.

This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances, the flash memory's going to wear out relatively quickly and unless it's easily replaceable it means everyone's going to need to buy new motherboards every year.

The SLC flash they're talking about will almost certainly last longer than the hard drives it is caching.

Re:HW buffer for drives (1)

timeOday (582209) | more than 4 years ago | (#29310703)

This is essentially a cache, which means it's going to get a lot of reads and writes.

No, it doesn't mean that. It's a disk cache, not a memory cache; only file operations will hit it. The number of writes will be just the same as on the SSDs millions of people already have.

It won't replace SSDs anyway. A bigger cache doesn't help much after a point, and RAM is getting so cheap most systems have plenty of file caching. What the SSD gives you is near-instant access to anything on your drive, and a cache can't do that. There will always be enough unpredictable reads that the mechanical drive would still have to be there, clattering away. I'm not going back, not on a laptop anyway.

Re:HW buffer for drives (2, Informative)

erple2 (965161) | more than 4 years ago | (#29309931)

Sounds like a good plan. Throw cheap battery backed memory, 4-16Gb onboard to act as a transparent buffer between harddrive(s) and system.

Do you mean gigabit or gigabyte? Either way, 16 gigabytes of RAM isn't very cheap right now. The cheapest DDR2 memory I've seen is about $12.50 per gigabyte, so that's an additional $200 per 16 gigabytes. Is that a good price to pay for some potential increase in speed? That's what I'd call "extremely hard to justify" for a consumer.

RAID cards have done this for ages, but it's becoming real option for desktops as memory price keeps declining.

Meh, even the most expensive RAID cards loaded up with tons of RAM aren't as fast as a couple of Intel SSD's right now, so why bother with the expense?

Re:HW buffer for drives (1)

hattig (47930) | more than 4 years ago | (#29309955)

Flash memory doesn't require a battery backup.

The main issue with this is that SSDs gain speed by being massively parallel, across 8 - 32 flash chips.

This is a single flash chip, so to get that performance improvement, all of the parallelism has to be internal. You're talking about a flash chip that also has this complex controller and PCIe (I presume) interface on-board.

Useful for netbooks, which can use it as their only layer of storage. Useful for OS installs (though technically it's a cache rather than a separate drive in that situation, so the OS would need to manage the storage entirely, the bootloader would need a driver if the kernel were stored on it, and so on).

Bullshit (0)

A beautiful mind (821714) | more than 4 years ago | (#29309687)

The article is bullshit! Random I/O is essentially uncacheable. I don't think the onboard cache will be 128-256GB in size, so tyvm, but I'll stick to using an SSD.

The important performance aspects of an SSD are: random read, random write, sequential read, sequential write, in that order.

Re:Bullshit (4, Insightful)

jcr (53032) | more than 4 years ago | (#29309707)

Random I/O is essentially uncacheable.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

-jcr

Re:Bullshit (1)

A beautiful mind (821714) | more than 4 years ago | (#29309771)

Not really. I don't doubt that Braidwood would increase performance, since I/O, be it memory or disk, is not entirely random: there are parts of memory and disk that are accessed more frequently and are therefore cacheable. Oh, and by "cacheable" I mean a non-marginal performance benefit from using the cache. Even with truly random I/O, a cache can still increase performance, but only very slightly.

Memory is usually far more cacheable than data on disk, however. Caching operating system files and executables on something like Braidwood would speed things up a lot, but it would be useless for any other area where high performance is needed, like gaming or working with larger files in general.

Re:Bullshit (1)

shic (309152) | more than 4 years ago | (#29309857)

Random I/O is essentially uncacheable.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

You should assume the word "Random" is in bold-type, then the claim makes more sense.

Virtual memory systems only work effectively to the extent that data access has 'locality of reference' - which is often, but not always, found in practice.

To my mind, the real promise of solid state is random access. Since the earliest days of data processing, software has had to take into account the sequential nature of access to durable storage (disk-based storage never did have a uniform access time across blocks), and this has influenced everything from file-system design to memory architectures. In an SSD environment, it becomes possible to accurately model performance at higher levels of logical abstraction, and, in my view, better systems should emerge as a consequence... assuming, of course, the world at large doesn't do something crazy, such as always accessing flash through a FAT file-system... LOL!

Re:Bullshit (2, Insightful)

Dogtanian (588974) | more than 4 years ago | (#29309865)

Random I/O is essentially uncacheable.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

His flaw lies in assuming, or implying, that most I/O *is* random.

Re:Bullshit (2, Insightful)

A beautiful mind (821714) | more than 4 years ago | (#29309973)

It's more the case of hedging characteristics against each other.

1. SSDs handle random I/O extremely well compared to traditional harddisks.
2. Braidwood is essentially a small, cheap, 8-16GB flash based cache.
3. If Braidwood is transparent to the OS, it will have a hard time guessing what to put in the cache. A lot of the I/O on a desktop/laptop is random, and the problem with caching the non-random part is that most OSes already cache frequently accessed parts of the disk themselves. That makes it very hard for a transparent caching layer to tell a frequently accessed executable apart from random I/O: both get read from disk only once per startup/shutdown cycle, the frequently accessed data because the OS keeps it in memory afterward, and the random I/O because it simply isn't requested again for a long time. So to make this caching work, the flash layer either needs OS-level support or very sophisticated statistics collection, specifically tuned to track access patterns across reboots, essentially providing a caching solution for startup.
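A rough sketch of that "statistics across reboots" idea (entirely hypothetical; the counter blob and preload policy are made up for illustration, not anything Intel has described): keep per-block access counts that survive reboots, and at startup preload the historically hottest blocks, which single-run LRU/LFU accounting could never identify.

```python
import heapq
import json

# Stand-in for a persistent on-flash counter store (just a string blob here).
persisted = ""

def record_access(counts, block):
    counts[block] = counts.get(block, 0) + 1

def save(counts):
    """Persist the access counts so they survive a 'reboot'."""
    global persisted
    persisted = json.dumps(counts)

def preload_set(n=4):
    """At 'boot', pick the historically hottest blocks to pre-populate the cache."""
    if not persisted:
        return set()
    counts = json.loads(persisted)
    return {b for b, _ in heapq.nlargest(n, counts.items(), key=lambda kv: kv[1])}

counts = {}
# Over many boots, startup files accumulate counts; one-off random reads don't.
for boot in range(10):
    for block in ("kernel", "libc", "gui", "browser"):   # read once per boot
        record_access(counts, block)
for block in ("tmp123", "tmp456"):                       # random one-off I/O
    record_access(counts, block)
save(counts)

print(sorted(preload_set()))  # the startup working set, not the tmp files
```

Within a single boot, every block above was read either once or zero times more than the others, so only the cross-boot counts reveal which ones are worth keeping in flash.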

Re:Bullshit (5, Insightful)

John_Booty (149925) | more than 4 years ago | (#29309911)

Random I/O is essentially uncacheable.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

-jcr

You're both right.

The problem here is that "random I/O" can have at least two subtly different meanings. In the very old days, "random I/O" meant the opposite of sequential (i.e., tape) I/O. In that sense, yes, random I/O is often extremely cacheable, as you say. That's why virtual memory works: system files, drivers, commonly used applications, and so forth are accessed much more often than other data.

"Random I/O" can also refer to I/O that does not follow any real pattern - ie, a 50GB database in which all records are accessed about equally as often. This kind of I/O is not really cacheable, practically speaking. Unless you can cache the entire thing.

What's the correct terminology for the second kind of random I/O? Random I/O with very low locality?
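The gap between those two kinds of "random" I/O shows up clearly in a quick simulation: an LRU cache does almost nothing for uniform access but catches nearly everything once the workload has a hot set. (A toy sketch; the store size, cache size, and 90/10 skew are made-up numbers, not measurements of any real system.)

```python
import random
from collections import OrderedDict

def hit_rate(accesses, cache_size=1000):
    """Simulate an LRU block cache and return the fraction of hits."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(accesses)

random.seed(42)
n_blocks, n_accesses = 100_000, 50_000

# Uniform "random I/O": every block equally likely (the 50GB-database case).
uniform = [random.randrange(n_blocks) for _ in range(n_accesses)]

# Skewed I/O: 90% of accesses go to a small hot set (the virtual-memory case).
hot_set = n_blocks // 200   # 500 hot blocks
skewed = [random.randrange(hot_set) if random.random() < 0.9
          else random.randrange(n_blocks) for _ in range(n_accesses)]

print(f"uniform hit rate: {hit_rate(uniform):.1%}")   # roughly cache/store, ~1%
print(f"skewed hit rate:  {hit_rate(skewed):.1%}")    # close to the 90% hot share
```

With uniform access the hit rate is stuck near the cache-to-store size ratio no matter how clever the cache is, which is exactly the "not really cacheable" case above.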

Re:Bullshit (1)

A beautiful mind (821714) | more than 4 years ago | (#29310011)

What's the correct terminology for the second kind of random I/O? Random I/O with very low locality?

Random I/O. The first is called non-sequential I/O.

Re:Bullshit (1)

silas_moeckel (234313) | more than 4 years ago | (#29310317)

Random I/O is hard to read-cache but very write-cache friendly. Modern systems already have a huge read cache: all unused memory. A huge nonvolatile write cache can do wonders for random write I/O. Databases and the like often write the same block multiple times in succession; over a long period, only the most recent version needs to be written to disk. I/O can also be reordered to be more sequential, helping seek times (yes, drives already do this, but with 32 megs of cache versus gigs of flash). Taken further, the flash could literally write all pending changes from the beginning to the end of the drive and then start again, making all drive write I/O sequential (barring internal drive sector remapping, or a need to read uncached data).
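The coalescing-and-reordering idea can be sketched as a toy write-back cache (purely illustrative; the block numbers and the sorted flush are my own stand-ins, not anything from the article):

```python
class WriteBackCache:
    """Toy nonvolatile write cache: keeps only the newest version of each
    block and flushes in ascending block order (one sequential pass)."""

    def __init__(self):
        self.pending = {}          # block number -> latest data
        self.writes_absorbed = 0   # rewrites that never need to reach the disk

    def write(self, block, data):
        if block in self.pending:
            self.writes_absorbed += 1   # older version is superseded in cache
        self.pending[block] = data

    def flush(self):
        out = sorted(self.pending.items())   # sequential order for the drive
        self.pending.clear()
        return out

cache = WriteBackCache()
# A database hammering the same hot blocks over and over:
for seq, block in enumerate((7, 3, 7, 7, 3, 9)):
    cache.write(block, f"v{seq}")
flushed = cache.flush()
print(len(flushed), "blocks flushed;", cache.writes_absorbed, "writes absorbed")
# 6 logical writes collapse into 3 sequential block writes
```

As the sibling "The D in ACID" comment points out, a real database would still need each write journaled somewhere non-volatile before it could be absorbed like this; the flash cache itself being non-volatile is what makes the scheme plausible.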

The D in ACID (1)

tepples (727027) | more than 4 years ago | (#29310613)

Databases and the like can often write the same block multiple times in succession over a long period only the most recent needs to be written to disk.

But each write still needs to be written to a non-volatile journal [wikipedia.org]; otherwise, the database is not durable [wikipedia.org].

Re:Bullshit (0)

Anonymous Coward | more than 4 years ago | (#29309797)

Reread the article: this isn't a small cache, it's a better tech than SSD built into the mobo. Big boxen (not Intel toys) have had this kind of thing for decades; memory and storage are treated as a single storage pool regardless of whether it's RAM or discs. Finally we're getting this into the world of PCs.

Re:Bullshit (1)

drsmithy (35869) | more than 4 years ago | (#29310291)

Random I/O is essentially uncacheable.

I think you mean "unpredictable" here, not "random".

Is Braidwood already canceled? (5, Informative)

bboy_doodles (170264) | more than 4 years ago | (#29309723)

There have also been rumors, however, that Braidwood has been canceled, at least in the near term:
http://www.dvhardware.net/article37368.html [dvhardware.net]

Re:Is Braidwood already canceled? (5, Informative)

Colonel Korn (1258968) | more than 4 years ago | (#29310081)

There have also been rumors, however, that Braidwood has been canceled, at least in the near term:
http://www.dvhardware.net/article37368.html [dvhardware.net]

I read another report (maybe at Anandtech) of the same thing earlier this week. It was a sidenote in a motherboard preview claiming that Intel removed it after it showed no meaningful performance advantage in real use, unlike an SSD.

How about the reliability ? (2, Insightful)

BESTouff (531293) | more than 4 years ago | (#29309727)

If the onboard flash is a cache, that means it will be used frequently, so it will wear faster. Won't that make you more likely to corrupt your data, even if your HD is still good?

Re:How about the reliability ? (5, Informative)

John_Booty (149925) | more than 4 years ago | (#29309953)

SLC flash memory, which the article claims Braidwood will use, is an order of magnitude or two more durable (in terms of write cycles) than the MLC flash memory used in most consumer-level devices like Intel's X25-M SSDs.

Wear-leveling and overprovisioning should ensure a long life for the memory used in a scheme like Braidwood. Intel, generally speaking, knows what they're doing in this area. Now if only I could afford one of their drives...

Re:How about the reliability ? (2, Interesting)

jcaplan (56979) | more than 4 years ago | (#29309977)

No. When flash fails it becomes unwritable, not unreadable. Your data is safe, your capacity declines.

Stupid (1, Redundant)

miffo.swe (547642) | more than 4 years ago | (#29309823)

If Intel has made an SSD that doesn't wear out, ffs sell them to me now. If not, then stuff that cache somewhere the sun don't shine and come up with a better solution.

dedicated hardware for swap partition (0)

Anonymous Coward | more than 4 years ago | (#29309839)

finally!
if the competition moves more things to dedicated on-board hardware, that will be fun for the rest of us!
Looking forward to /dev/hdbraidwood

On-Drive NAND also quite likely (4, Interesting)

MasterOfGoingFaster (922862) | more than 4 years ago | (#29309875)

Funny - this very thing was being discussed around 1985 (I think), but using battery-backed RAM as a way to reduce boot time. The thinking was people wouldn't put up with a computer that took 30 seconds to start, and if we didn't have a 2-5 second boot time (equal to a TV), the personal computer would never fly. But since it took from 1985 (80386 chip) to 1995 (Windows 95) for a 32-bit OS to become popular, maybe 25 years is reasonable.

Or not. Man, this industry moves at a snail's pace in a lot of areas. Why do we still live with the x86 instruction set? Is "the year of UNIX" here yet?

Anyway, three competitors will emerge:

- Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

- Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

- The third competitor will work with Microsoft or Apple to get OS support for fast boot. Apple will get there first and you'll see a commercial on TV with the Mac guy wondering why the PC guy takes the entire commercial to wake up.

In a single drive system, the cost will be about the same. Doing it on the drive will create an instant performance boost on any machine, and well worth the estimated $10 added cost.

Re:On-Drive NAND also quite likely (1)

MobyDisk (75490) | more than 4 years ago | (#29310303)

- Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

Several manufacturers did this, but it didn't offer much benefit over the existing DRAM caches that are on the drives. Further evidence of this is that Microsoft's ReadyBoost [wikipedia.org] does this, and provides no major benefit. Bottom line: Just get more RAM in your machine, or buy a drive with a bigger cache.

- Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

Already covered. Windows XP and above create a Prefetch folder that contains the files needed during bootup, in a nice contiguous block. Once you do that, putting it into NAND doesn't matter much, since seek time becomes mostly irrelevant and NAND doesn't offer higher continuous read speeds than a good platter hard drive.

Plus: Boot time isn't entirely I/O bound. I am sitting next to two identical Windows XP embedded systems: One with an SLC NAND flash hard drive, the other with a 7200RPM disk: The disk boots about 3 seconds faster.

Memristors? (0)

Anonymous Coward | more than 4 years ago | (#29310051)

Memristors?

Crush? (1)

2meen (728316) | more than 4 years ago | (#29310141)

From TFA: "Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity, ..." - In what way would it even compete with the SSD market? I'll stick to my separate 250 gig SSD drives for a while longer methinks.

Re:Crush? (1)

chrysrobyn (106763) | more than 4 years ago | (#29310353)

From TFA: "Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity, ..." - In what way would it even compete with the SSD market? I'll stick to my separate 250 gig SSD drives for a while longer methinks.

You lack imagination. Consider a case where terabyte SSDs are more than $500, but spinning terabyte media is less than $100. If there were a 16 gig cache sitting on the main board that provided SSD write speeds at all times, and SSD read speeds for most things (in a consumer app, I think "most" works; in a database, not so much), and it cost significantly less than $400, I can easily see how it would erode the SSD market.

As an aside, any SSD that could sit in a lower latency, higher bandwidth place than at the end of a SATA cable starts to look pretty interesting. What if I could buy a Thinkpad that had no upgradable hard drive, but had 150 gigs of SSD on board that responded faster than 6GB/s and with lower latency than any current media... I think I wouldn't resent the "not upgradable".

Re:Crush? (1)

drsmithy (35869) | more than 4 years ago | (#29310555)

In what way would it even compete with the SSD market?

When it offers equivalent benefits for a fraction of the cost.

Fail to see... (1)

WRX SKy (1118003) | more than 4 years ago | (#29310147)

I RTFA and still fail to see how this will "erode the SSD market". For example, I have a drive with 100+ GB of data on it (music, photos, documents, source code, backups)... my interactions with that drive are completely random and unpredictable. Yes, you COULD cache the entire source code folder when I start up my IDE, but what if I want to listen to some music on shuffle while I'm doing that, or add an image file from the photo directory to the source code... there goes the caching strategy.

Too expensive (1)

GreatBunzinni (642500) | more than 4 years ago | (#29310161)

From the article:

Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity, will only raise the cost of a PC by about $10 to $20 per system, according to Jim Handy, the Objective Analysis analyst who authored the report.

Compared with the overall cost of a brand-new PC, that increase doesn't raise any red flags. Nonetheless, it must be said that since this Braidwood technology "resides directly on the motherboard" (i.e., it's yet another component embedded in the motherboard), it will increase all motherboard costs by $10 to $20. That means Braidwood is an excuse to ramp up current motherboard prices [newegg.com] by around 20%.

Call me old-fashioned, but I prefer my hardware cheap, without any unnecessary bells and whistles. In fact, is this technology even capable of doing what the marketing blurb says it does? Nowadays it's hard to purchase an HD with less than 200GB of capacity. Is a 4GB buffer really capable of successfully buffering all that data?

Re:Too expensive (1)

drsmithy (35869) | more than 4 years ago | (#29310635)

Is a 4GB buffer really capable of successfully buffering all that data?

Run some benchmarks before and after disabling the 16-32MB cache on your hard disk. You might be surprised.

The flash buffer should be on the HDD (4, Interesting)

thue (121682) | more than 4 years ago | (#29310335)

The buffer should obviously be on the hard disk. That way the data on the disk will always be in sync, even if there are writes buffered in the flash cache when the computer loses power. I can't see a good reason to put it on the motherboard instead. Especially as most consumer systems have exactly one HDD.

The article says that the flash buffer could work for "all system io". I can only think of optical disks and flash drives as possibilities other than hard disks. But optical disks are interchangeable, so they have to be reread on each use anyway, and could just as well be cached in RAM. And it makes no sense to cache flash drives in a flash cache...

Do not want (0)

Anonymous Coward | more than 4 years ago | (#29310379)

I want all persistent memory to be detachable or write-protected in hardware. If you want to add something, how about adding a jumper for the write enable line of the BIOS flash memory chip?
