
Phase Change Memory vs. Storage As We Know It

timothy posted more than 4 years ago | from the change-is-constant-and-welcome-to-2010 dept.

Data Storage 130

storagedude writes "Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant. The author sees phase change memory as a technology that could unseat storage networks. From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery. I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running. In short, disk becomes the new tape.'"


fp (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30611308)

frosty piss!!

oops, I mean first post!

CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30611320)

CD-R is a phase change memory. It revolutionized things, but even DVD-Rs and BD-Rs aren't that spectacular these days. Seems holographic discs have more potential if the cost barrier comes down.

Re:CD-R? (4, Insightful)

QuoteMstr (55051) | more than 4 years ago | (#30611354)

Phase change memory is nothing like a CD-R. This stuff has the density of a hard drive, and the speed is very close to DRAM. It's non-volatile to boot. It's a serious contender to become universal memory.

Imagine how different operating systems and programs would be if we could make RAM non-volatile.

Re:CD-R? (2, Insightful)

ceoyoyo (59147) | more than 4 years ago | (#30611400)

"Imagine how different operating systems and programs would be if we could make RAM non-volatile."

Pretty much like they are now? Does anyone actually cold boot their machines anymore?

Now, if RAM were as cheap as hard disks....

Re:CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30611476)

Are you retarded? You really think nothing would be different with non volatile RAM? EVERYTHING would be so much faster.

Re:CD-R? (1)

Jeff DeMaagd (2015) | more than 4 years ago | (#30611502)

If it's so much faster, then there must be a reason it's not being more widely used in place of DRAM. Price is probably the biggest one; I think it requires the use of four or six transistors per cell rather than one or none. I recall that it's a high power consumer.

Re:CD-R? (2, Interesting)

TheRaven64 (641858) | more than 4 years ago | (#30613552)

It isn't faster than DRAM, it's faster than Flash and hard disks. It is also much more expensive per MB than either: about 4 times as expensive as DRAM at the moment, and very few people are thinking of replacing their persistent storage with battery-backed DRAM.

You seem to be confusing PCRAM with SRAM. Static RAM uses six transistors, while dynamic RAM uses one transistor and one capacitor. That makes SRAM much faster, because you don't have to wait for refresh cycles, but it is a lot less dense and so much more expensive.

Phase change RAM is much more complicated to make, but can be quite dense. The latest versions use four states so you can store two bits per cell, rather than one. Eventually you may be able to store an entire byte in a cell, which would get the density well above DRAM, but the physical phase change is likely to be slower than an electronic switch for a long time, so I expect to see phase change RAM as part of a three-tier cache hierarchy (under DRAM and SRAM), at least initially.

Not really (4, Insightful)

oGMo (379) | more than 4 years ago | (#30611796)

Are you retarded? You really think nothing would be different with non volatile RAM? EVERYTHING would be so much faster.

First off most non-volatile RAM isn't nearly as fast as DRAM. So let's assume you mean "what if everything were in DRAM, and that was non-volatile, it would be so much faster". Well, again not really. Faster, but there are far more bottlenecks than just disk I/O. You can go buy ramdisks now, or you could make them in your current RAM, copy the OS there, and run off that after you boot. Go try it. Firefox isn't going to render quicker, your mail isn't going to load any faster, and youtube isn't going to lag any less. If you work with large photos, most software is already going to exhaust your RAM, so (given you have sufficient quantities) you're already not losing anything.

In short, because of modern hard disk and OS caching, the ridiculous quantities of RAM these days, and a current reliance on the network for most tasks, a pure ramdisk system isn't likely to be that much better for most people. If you put a large database or maybe compile there, you would see improvement. But that's not common for most people.

Re:Not really (5, Interesting)

StarsAreAlsoFire (738726) | more than 4 years ago | (#30612338)

"Faster, but there are far more bottlenecks than just disk I/O."

Generally, I disagree with the statement as written. I would say that there are other LIMITS. Not bottlenecks. Although for something like video encoding you could easily turn things around and say 'Look! Your hard-drive is bottlenecked by your encoder!'. Yeah yeah. So I guess I agree more than I want to admit.

Almost by definition, there's always going to be a bottleneck somewhere in your system: the chances of ALL of your PC's components working at *exactly* 100% of their capacity is pretty close to zero. And that's for a particular task. Randomize the task and it all goes to hell. So the question we are discussing is really 'If I remove bottleneck n, how many seconds does it shave off the time to run task x?', averaged over a set of 'common' tasks. But if we made our external drives all as fast as DRAM (or whatever, as above), there would be no other single bottleneck left in the system that you could remove which would give you even a handful of percentage points of improvement. Except maybe un-installing Outlook. Or banning Subversion repositories from your enterprise environment -_-.
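The 'remove bottleneck n, how many seconds does it shave off' question is Amdahl's law in disguise; a quick sketch (the 30% I/O fraction is an invented example, not a measurement):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of runtime is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose 30% of a task's wall time is spent blocked on disk I/O.
print(amdahl_speedup(0.3, 10))    # ~1.37x from a 10x faster disk
print(amdahl_speedup(0.3, 1e9))   # ~1.43x even from an infinitely fast disk
```

Which is why removing the disk bottleneck entirely buys only a few tens of percent unless the task is genuinely I/O-dominated.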

For most components in a PC, you have to square the performance to see a significant performance difference, all else being held equal. Tasks that lag noticeably, and that are not dramatically improved by a simple doubling of disk performance (3.5ms seek, 150MB/s sustained transfer), are pretty rare. Video encoding, for instance. Certainly getting more common. But with a good video card and a cheap hard drive, you're getting pretty close to, if not exceeding, maximum write speeds on the drive while doing a CUDA rip.

I think that if Microsoft had released a little monitor that displayed the cumulative time spent blocked on [Disk|CPU|Graphics|Memory|Network] (a column in Task Manager, for instance. Hint, hint) back in Windows 95, spinning disks would be considered quaint anachronisms by now. Look at how much gamers spend on video cards, for almost no benefit.

Minute 2 of the Samsung SSD advert: http://www.youtube.com/watch?v=96dWOEa4Djs is pretty interesting, if you haven't seen it yet.

Re:Not really (1)

bored_engineer (951004) | more than 4 years ago | (#30612528)

Why are moderation points never available when I *want* them?

Re:Not really (0)

Anonymous Coward | more than 4 years ago | (#30613980)

> Minute 2 of the Samsung SSD advert: http://www.youtube.com/watch?v=96dWOEa4Djs [youtube.com] is pretty interesting, if you haven't seen it yet.

Yeah, but he's using a RAID of 26 SSD drives. When you can pack that performance into a single 3.5" form factor, let us know.

Re:Not really (2, Informative)

johncandale (1430587) | more than 4 years ago | (#30612660)

You can go buy ramdisks now, or you could make them in your current RAM, copy the OS there, and run off that after you boot. Go try it. Firefox isn't going to render quicker, ....

I have a virtual ramdisk now in XP, with an ISO that loads onto it at boot. Firefox with all its extensions and plug-ins is on it. I can tell you with certainty it loads much faster, pretty much instantly, maybe a half second. I don't really have any delays in rendering, so I don't know what you are referring to there. Sure, most programs are not going to run much faster (most), but they will load a hell of a lot faster. Very helpful if you close and load different stuff. It is so nice to be able to load Firefox with Speed Dial and see 9 different websites loaded 10x faster than you can start Word.

Re:Not really (1)

asaz989 (901134) | more than 4 years ago | (#30613136)

I don't think TFA is talking about a desktop workload - it's referring to data centers, where databases and such are precisely the kind of task that needs doing.

Re:Not really (3, Informative)

mindstrm (20013) | more than 4 years ago | (#30614348)

Nonsense.

Certainly "everything" won't be much faster - but we're always after faster storage. I/O is a very common bottleneck. Sticking everything in RAM will make a big difference to a multi-use computer.

It really depends on the use-case - given enough RAM, a good caching algorithm, and a simple use-case, maybe it won't help once the cache is primed (say serving static content from a fast webserver). Everything ends up in RAM anyway.

But running a system from a fast SSD, or even from a ramdisk, as you say, leads to significant improvements in usability for general-purpose-ADHD-computer use. Apps load instantly.

To go back to the SSD example - more and more people are finding an SSD for a system drive makes things significantly faster, and ram-backed drives DO make databases much faster.

Sure, it won't make the network faster - and if everyone actually bought *enough* RAM for the task at hand, caching would take care of it, but for some reason, you know, most people don't.

Re:CD-R? (0, Troll)

jameskojiro (705701) | more than 4 years ago | (#30612030)

MS would find a way to Bloat it all up, trust me.

Re:CD-R? (1)

hairyfeet (841228) | more than 4 years ago | (#30612324)

How do you figure that? With 4GB standard, and 8-16GB soon just as cheap (I paid $65 after rebate for 8GB), most can already run their entire OS in RAM, as well as prefetch the programs they use the most, so how will you get faster than that?

Now I'll be the first to admit that for large databases and servers this stuff could rock, but as far as I've seen, most with large databases and server farms are already loading their machines with a buttload of RAM, and since you rarely if ever turn machines like that off, I don't see it being too big a deal there except for backups. Considering that pretty much all we have had for backups since the invention of the DVD has been HDDs (BD is still too expensive to be a valid solution for most folks), it would be awesome to come up with something new to back up our giant HDDs with besides another HDD.

But seriously, with huge amounts o' RAM getting to be dirt cheap, and SSDs coming down in price every day, I just don't see it making that big a splash. Now once we hit 128 cores with 64GB+ of RAM, I'm sure that even SSD won't be able to feed that much horsepower, even with RAID. Then I can see PCM making a huge difference. But for my home consumers at least, the stuff we have now is already faster than they frankly know what to do with. An AMD or Intel quad with 8-12GB of RAM is frankly faster than I am able to push buttons.

There reaches a point where for the average Joe the machines are FAR beyond "good enough", and frankly we passed that at dual cores and 3GB of RAM. Checking my customers' logs on follow-up, a good 85-90% of the time the machine is just twiddling its thumbs because the user just doesn't have enough work for the thing to do as is. Frankly I just don't see how making machines any faster will help Joe Average, except maybe if he is one of those "gotta have the big ePeen" types.

Re:CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30612496)

Ok, now be frank with me and tell me how you frankly really feel :-)

Re:CD-R? (1)

mindstrm (20013) | more than 4 years ago | (#30614368)

SSDs are coming down every day - this would be the *next* step past SSDs. People want SSDs because it makes things faster -the same thing will (well, could) happen with this technology.

Everyone says the "good enough for average Joe" line every year, about every new technology.... I've tried to use Average Joe's computer often - it usually sucks.

I'd agree - buy enough RAM, have a good caching mechanism, and you shouldn't need all this new quasi-RAM-like stuff - but it's sensible that at some point we can leverage more layers of storage with decreased latency and increased speed into our computing models; things will get faster and better.

Re:CD-R? (1)

ceoyoyo (59147) | more than 4 years ago | (#30614002)

Using caps doesn't really make your argument any stronger.

WHAT do you think would be stronger, and WHY? RAM is effectively non-volatile, so long as you don't turn off the power. I never power down my notebook, so it effectively has non-volatile RAM. Ditto for my desktop at the lab.

So if you really think non-volatile RAM makes everything sooo much faster, use the sleep function on your computer.

Re:CD-R? (1)

CAIMLAS (41445) | more than 4 years ago | (#30612388)

Most people cold-boot their computers. Why? Drivers don't properly support suspend (eg. it fails), operating systems and applications leak memory and crash, and generally, the experience becomes unpleasant.

Re:CD-R? (1)

Shoe Puppet (1557239) | more than 4 years ago | (#30613536)

Most people don't even know there is such a thing as hibernating.

Re:CD-R? (1)

ceoyoyo (59147) | more than 4 years ago | (#30614030)

They seriously haven't gotten that figured out yet on the Windows side of the fence?

I assumed Windows, Macs and well configured Linux machines were roughly equal on that score. If it's the case that suspend isn't reliable in Windows, the solution isn't expensive new non-volatile RAM, it's getting your driver and OS manufacturer to not write sucky software.

Re:CD-R? (3, Interesting)

NeuralAbyss (12335) | more than 4 years ago | (#30611408)

Non-volatile? Like all the other "non-volatile RAM, instant-on" technologies that have gone before? MRAM, SRAM, Holographic storage... and now phase-change memory.

I've heard this marketing bullshit before. Call me when it's not vapourware.

Re:CD-R? (0, Insightful)

Anonymous Coward | more than 4 years ago | (#30611786)

Uh yeah, because previous technologies haven't been successful, all future ones must fail too. And how the fuck can it be marketing bullshit when it doesn't exist? Also, BE MORE FUCKING ENTHUSIASTIC about new technology when you're on Slashdot.

Re:CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30611908)

Please mod this ridiculous comment to -5. We don't want to encourage children to post on /.

Re:CD-R? (1)

itsenrique (846636) | more than 4 years ago | (#30612190)

More likely this guy submitted this story, or better yet: he's a hype pusher for this phase change storage.

Re:CD-R? (1)

mlts (1038732) | more than 4 years ago | (#30613178)

At least holo storage made it to something concrete... but InPhase markets it as a replacement for optical storage, which is a high end market that companies shell out the big bucks for, so they can have top reliability in WORM archiving for legal reasons.

What would really be remarkable is one of these technologies making it not just to the boutique high end archiving market, but to something that can replace tape drives, ZIP drives, or USB flash drives. Enterprises would be beating down the door of a company who can make something that can be cheaper than tape, but on spinning platters and be easily moved around a robotic autochanger. Especially if the capacity is high enough that significantly fewer pieces of media are needed for storage and transport than the existing hard disk VTL or tape system. If the media company had a standardized way of encrypting the data with AES-256 in hardware that would be even nicer.

Re:CD-R? (1)

mrnobo1024 (464702) | more than 4 years ago | (#30611500)

I don't think that much would change: if some piece of memory is accessible like RAM--that is, it can be modified quickly, with just a single CPU instruction--then for most practical purposes it might as well be volatile memory, because a software bug could easily lead to it being completely wiped.

Re:CD-R? (3, Insightful)

mangobrain (877223) | more than 4 years ago | (#30611552)

The speed of PCM would need to closely match - or exceed - the speed of DRAM for people to adopt it as a replacement, so I doubt the model would quite be one of non-volatile RAM. I imagine it would be more like having a ridiculously fast SSD.

Given the propensity of programs to corrupt and/or leak memory, I'm not sure I'd want my system memory to be non-volatile. The dividing line between system memory and mass storage allows for robustness against errors which, without the ability to reboot, wipe the slate clean and load up the last saved data, might end up being catastrophic. It'd be nice if no program ever had such errors, but this is reality. ;)

If nothing else, programs will always need the ability to serialise data into platform-agnostic formats, unless you expect the world to standardise on one platform or stop sharing data.

Re:CD-R? (1)

mangobrain (877223) | more than 4 years ago | (#30611626)

It would be kind of awesome if files were accessed the same way as memory areas though. Kind of like if everything was transparently mmap-ed, but with the ability to grow/shrink the area, and with the option to have changes reflected immediately on the underlying medium.

    file *foo = new file("/dev/pcm");
    foo->append("Hello world1");
    (*foo)[11] = '!';    // foo is a pointer, so dereference before indexing
    foo->sync();         // explicit save of the mapped changes

(sorry for replying to my own post, forgive me mods!)
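For what it's worth, today's closest approximation of this is mmap; a minimal sketch in Python (the file name is illustrative, not a real PCM device):

```python
import mmap
import os

# Write a small file, then edit it through a memory mapping: indexing the
# map is indexing the file, much like the sketch above.
path = "demo.bin"                      # illustrative path, not /dev/pcm
with open(path, "wb") as f:
    f.write(b"Hello world1")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        m[11:12] = b"!"                # in-place edit, like foo[11] = '!'
        m.flush()                      # the explicit 'save' step

with open(path, "rb") as f:
    print(f.read())                    # b'Hello world!'

os.remove(path)
```

The one part of the sketch plain mmap cannot do is grow the area as the file is appended to; that still requires remapping.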

Re:CD-R? (2, Informative)

_merlin (160982) | more than 4 years ago | (#30611770)

IBM AS/400 worked like that - the TIMI virtual machine maps all storage into a flat 128-bit address space.

Re:CD-R? (1)

pmontra (738736) | more than 4 years ago | (#30613152)

That's a malloc-like API to storage. I'm pretty sure there is plenty of CS literature on this subject, but I don't have the time to google it. I'll just write down a few quick thoughts about it.

If storage is about as fast as RAM you can work on it as if it were RAM, build data structures on it and persist them. The OS will provide ways to share them with other programs. The storage will be a single in-memory object oriented database.

An example: an editor could persist its internal OO representation of a text file, with a toString method to make that data available to other editors with different representations. That implies a lot of data duplication (OK only if fast persistent storage is as cheap as today's hard disks) and the need for a file system that merges all those different representations under a single name, so we can find the file and open it for editing (and the compiler can find it for compiling!).
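This persist-the-object idea can be prototyped today with Python's shelve module, which gives dictionary-like access to pickled objects on disk (the store name and Document class here are invented for illustration):

```python
import shelve

class Document:
    """Toy stand-in for an editor's internal OO representation."""
    def __init__(self, lines):
        self.lines = lines

    def to_string(self):
        """The 'toString' interchange method from the comment above."""
        return "\n".join(self.lines)

# Persist the live object itself under a name; no hand-designed file format.
with shelve.open("editor_store") as db:
    db["notes.txt"] = Document(["phase change memory", "disk is the new tape"])

# Later (or from another program), pull the object back and ask for its
# portable representation.
with shelve.open("editor_store") as db:
    print(db["notes.txt"].to_string())
```

Shelve still serializes under the hood, of course; with RAM-speed persistent storage that serialization step is what would disappear.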

As this PCM storage won't be as cheap as HD a hierarchy of storage is probably desirable with some caching algorithm to move data from slower disks to faster chips.

I don't know if current mainstream OSes can survive this change, so market forces will slow it down and make it very gradual.

Re:CD-R? (1)

Magic5Ball (188725) | more than 4 years ago | (#30611890)

The speed of PCM would need to closely match - or exceed - the speed of DRAM for people to adopt it as a replacement, so I doubt the model would quite be one of non-volatile RAM.

The units they're sampling out now are faster than the main memory on the four year old computer on which I'm typing this response.

The mass popularity of netbooks among both geeks and muggles indicates that fast is no longer the strongest defining feature of a computer.

Re:CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30611590)

moving parts vs. no moving parts

We already have a fast, non-volatile technology: solid state drives. They're expensive right now, which will change as they become more common. Hopefully these companies get their heads straight and come up with some way to market their speed, like 4x, 8x, etc., so laymen can wrap their heads around it, and also release the damn 3.5" form factor we've all been waiting for.

Re:CD-R? (1)

cheesybagel (670288) | more than 4 years ago | (#30611848)

Remember like 10 years ago when Intel was claiming Ovonyx memory was going to be the greatest thing since sliced bread? Yeah.

It was going to be the non-volatile memory that would replace hard disks! Then it never happened and Flash memory kept getting cheaper and better. Kinda makes you wonder if this wasn't another trick by the Intel Capital folks to pump up the stock. Then again Ovshinsky [wikipedia.org] basically invented CD-RW/DVD-RW amorphous phase change materials and NiMH batteries so everyone figured out it could actually happen. But it never did. Amorphous silicon solar (a-Si) panels never sold that well either.

Re:CD-R? (1)

IndigoDarkwolf (752210) | more than 4 years ago | (#30612048)

Yeah, computer help desks around the world would have to learn something besides "try rebooting the computer"!

Re:CD-R? (1)

geckipede (1261408) | more than 4 years ago | (#30612142)

This has always bothered me. I like having a separation of semipermanent starting point and running copy. I don't want the distinction between rebooting and restoring from backups to die.

Re:CD-R? (0)

Anonymous Coward | more than 4 years ago | (#30613054)

Imagine how different operating systems and programs would be if we could make RAM non-volatile

I wouldn't be able to tell Windows users "just reboot and the problem will go away" anymore!

We've heard this forever... (1)

blahplusplus (757119) | more than 4 years ago | (#30611350)

... the death of x tech here, it will eventually die once the groundwork has been laid to migrate to a better system.

Re:We've heard this forever... (2, Funny)

IndigoDarkwolf (752210) | more than 4 years ago | (#30612166)

Long-term data storage is dead! All hail long-term data storage!

Re:We've heard this forever... (1)

davester666 (731373) | more than 4 years ago | (#30612274)

Advances in storage not keeping up with advances in CPU/RAM doesn't make it irrelevant. It puts it squarely on the critical path.

Re:We've heard this forever... (2, Insightful)

TheLink (130905) | more than 4 years ago | (#30612624)

Yes, seriously.

Despite what the article writer thinks, if PCM is that great, the storage manufacturers will just create storage devices that use PCM technology. The other option is to go out of business ;).

I see lots of "normal" people using external storage drives. These people are far less likely to open up their computer and swap chips on their motherboard.

Transferring 1TB from my house to my office by hand is faster and more reliable than using my crappy ISP. If the writer thinks storage IO speeds are bad, he should look at the internet speeds in many parts of the world.

Having your storage on a "drive" makes it easier to upgrade (or even hot-swap), than having it on the motherboard.

Motherboards that allow you to hot swap memory or CPUs tend to be expensive.

Also, stuff that plugs into one motherboard can't always be plugged into next year's new technology motherboard.

Trust me, being able to read the same drive on a totally different computer is something very important.

By the time you've designed a suitable interface, storage format, protocols and physical connectors for all of that, the stuff that plugs into it might as well be called a drive.

And whatever you call it, the storage companies will be building it.

FWIW, I do hope that storage I/O speeds increase dramatically, and very soon. It's already 2010, progress has been rather slow IMO ;).

We're almost there already (1)

Lord Byron II (671689) | more than 4 years ago | (#30611374)

When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory? As long as you don't suffer a system crash, you can unload it back to disk when you're done.

Re:We're almost there already (1)

tepples (727027) | more than 4 years ago | (#30611430)

When you can pick up 4GB of RAM memory for a song

A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.

why not load the whole OS into memory?

Puppy Linux does, and Windows Vista almost does (see SuperFetch).

As long as you don't suffer a system crash

Power failure happens.

Re:We're almost there already (3, Interesting)

mysidia (191772) | more than 4 years ago | (#30611788)

Power failure happens.

That's what journaling is for.

Load the system image into RAM at boot from the "image source".

Journal changes to user datafiles.

When a certain number of transactions have occured, commit them back to the main disk.

If the system crashes... load the "boot volume" back up, replay the journal.

No need to journal changes to the "system files" file system (that isn't supposed to change anyways). If a system update is to be applied, the signed update package gets loaded into the journalling area, and rolled into the main image at system boot.
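The load/journal/commit cycle described above can be sketched in a few lines (a toy model: the file names, JSON encoding, and commit threshold are all invented for illustration):

```python
import json
import os

JOURNAL = "journal.log"   # append-only log of uncommitted changes
STORE = "store.json"      # the committed "main disk" image
COMMIT_EVERY = 3          # the "certain number" of transactions

def load():
    """Boot: load the committed image, then replay any journaled changes."""
    data = json.load(open(STORE)) if os.path.exists(STORE) else {}
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            for line in j:
                key, value = json.loads(line)
                data[key] = value
    return data

def write(data, key, value):
    """Update the in-RAM copy and make the change durable in the journal."""
    data[key] = value
    with open(JOURNAL, "a") as j:
        j.write(json.dumps([key, value]) + "\n")
        j.flush()
        os.fsync(j.fileno())   # the journal entry survives a crash
    # Once enough transactions pile up, commit and truncate the journal.
    with open(JOURNAL) as j:
        pending = sum(1 for _ in j)
    if pending >= COMMIT_EVERY:
        with open(STORE, "w") as s:
            json.dump(data, s)
        os.remove(JOURNAL)
```

A crash between writes loses nothing: `load()` replays whatever reached the journal, which is exactly the recovery path described above.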

Another possibility would be to borrow a technology from RAID controller manufacturers... and have a battery backup for the RAM in the form of a NiMH battery pack. If power is lost, upon system boot, the RAM image will be restored to the same state it was in as of the unexpected shutdown/crash.

Also avoid clearing the RAM region used for file storage at boot.

Re:We're almost there already (2, Informative)

davecb (6526) | more than 4 years ago | (#30612140)

Interestingly, this closely resembles the discussion of the system image used in Xerox PARC Smalltalk....

--dave

Re:We're almost there already (1)

fyngyrz (762201) | more than 4 years ago | (#30612154)

That's what journaling is for.

No, that's what a UPS is for. :)

Re:We're almost there already (1)

tepples (727027) | more than 4 years ago | (#30612618)

Then why doesn't a UPS come bundled with name-brand desktop PCs in the way a keyboard, mouse, monitor, and sometimes even a printer do? And why don't sellers of used laptops provide any warranty for the UPS built into the laptop?

Re:We're almost there already (1)

hitmark (640295) | more than 4 years ago | (#30614818)

Who buys desktops these days? Most people seem to be getting laptops, even though they run them off mains most of the time. Best thing: they have a built-in UPS ;)

Re:We're almost there already (1)

mysidia (191772) | more than 4 years ago | (#30612730)

Even UPSes have fuses that can blow / breakers that can trip. A UPS can overload.

Someone can accidentally hit the EPO, or power-off switch on the UPS.

The UPS battery may be too low to permit a graceful shutdown before power expires.

The PC power supply can fail.

Someone could trip over the power cord running to the PC.

Even with a solid UPS, it doesn't require much imagination at all to recognize how likely a power failure or 'hard down' is to occur eventually.

Losing all your data/changes in such cases, is probably unacceptable.

Re:We're almost there already (1)

tepples (727027) | more than 4 years ago | (#30612608)

Journal changes to user datafiles.

When a certain number of transactions have occured, commit them back to the main disk.

What is "a certain number" that won't require the disk to be spun up all the time committing transactions?

the RAM image will be restored to the same state it was in as of the unexpected shutdown/crash.

It will be restored to the same state: a crashed state.

Re:We're almost there already (1)

mysidia (191772) | more than 4 years ago | (#30612760)

What is "a certain number" that won't require the disk to be spun up all the time committing transactions?

Why spun up? use a write-optimized SSD for the journal, and compact flash for the rarely-changing system boot image.

"A certain number", the exact choice is a design/engineering concern, but probably fairly small values should be used, to avoid data loss.

It will be restored to the same state: a crashed state.

Well, of course, the filesystem would be in the same state as at the time of the crash.

That doesn't by any means indicate a second crash will occur.

Re:We're almost there already (1)

jameskojiro (705701) | more than 4 years ago | (#30612050)

>> A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.

4GB worth of music on iTunes is going to cost a hell of a lot more than 4GB of system memory. So memory these days can be had for about 40-80 songs....

Re:We're almost there already (1)

nyet (19118) | more than 4 years ago | (#30613050)

Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out. Unused RAM has always been used as disk cache in proper VMs. Only MS was stupid enough to need an *executable* (smartdrv.exe) to accomplish this most fundamental of tasks.

Re:We're almost there already (2, Informative)

mangobrain (877223) | more than 4 years ago | (#30611442)

You may be able to "load the whole OS into memory", but that's missing the point, which is the data people work with once the OS is up and running. If that 4GB was enough to store all the data for the entirety of any conceivable session, on servers as well as desktops, why would anyone ever buy a hard drive larger than that? Hard drives would probably already be obsolete. I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot, I bet more than 4GB of that hard drive is currently in use.

TFA is talking about replacing mass storage with PCM. The summary's usage of the phrase "storage networks" should also have been a hint.

Re:We're almost there already (1)

ls671 (1122017) | more than 4 years ago | (#30611664)

> load the whole OS into memory

Replace with "load the whole OS into memory plus the disk content mostly used".

Linux and most OSes already do this for you. Look at the free output below on that 8GB machine. Programs only use 969MB (0.96GB) of RAM. Linux has swapped 273MB of program memory to disk because it is seldom used (memory leaks?).

Linux uses 6.9GB for buffers/cache, which is more than the whole OS loaded into memory. It caches disk content in RAM, so in the end only 45MB is not used at all.

Crank up the memory on your system and gain speed by one or two orders of magnitude for recurrent tasks; I will never tell people enough about this ;-) The future will be here today! ;-)

    $ free
                 total       used       free     shared    buffers     cached
    Mem:       7939428    7893512      45916          0      19272    6904604
    -/+ buffers/cache:     969636    6969792
    Swap:     17326008     273264   17052744
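For reference, the "-/+ buffers/cache" row is pure arithmetic on the "Mem:" row; checking it against the numbers above:

```python
# Numbers from the `free` output above (kilobytes).
total, used, free_kb = 7939428, 7893512, 45916
buffers, cached = 19272, 6904604

# Memory actually used by programs, once reclaimable cache is excluded:
used_by_programs = used - buffers - cached
# Memory effectively available (truly free plus reclaimable cache/buffers):
effectively_free = free_kb + buffers + cached

print(used_by_programs)   # 969636, the '-/+ buffers/cache' used column
print(effectively_free)   # 6969792, the '-/+ buffers/cache' free column
```

The two derived columns sum back to the total, which is why "only 45MB free" is not a problem.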

Re:We're almost there already (1)

mindstrm (20013) | more than 4 years ago | (#30614418)

a) The bottleneck in pricing, I don't see 64 gig memory modules on the cheap, or supported by any motherboards yet.
b) The initial load of data (whether prefetch or whatever) that I want to work with is still constrained by whatever it's stored on.

I'd love to have a few terabytes of ram. That would work for me... and that's where we're heading. how the OS manages the various levels of RAM (as cache, storage, or whatever) is up for debate, I'm sure we'll see some interesting mechanisms.
(like how ZFS can have an SSD assigned as a cache drive for a given storage pool, -vs- the home user putting system files and software on the SSD ,and using regular storage for data, etc)

So yes - people SHOULD get systems with a lot more memory. Lots of memory is good. (I'm a "no swapfile" guy myself. If I don't have enough physical RAM to do what I need, then I need more RAM - not the gradual slowdown that swap brings. Yes, I know there are counter-arguments to this. All of them can be refuted by simply buying more RAM.)

Re:We're almost there already (4, Interesting)

Paradigm_Complex (968558) | more than 4 years ago | (#30611722)

When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?

For what it's worth, you can do this with most Linux distros if you know what you're doing. Linux is pretty well designed to act from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM. I've been doing this on my Debian (stable) boxes since I realized I couldn't afford a decent SSD and wanted a super-responsive system. Firefox (well, Iceweasel) starts cold in about two seconds on an eeepc when set up this way, and it starts cold virtually instantly on my C2D box. In fact, everything seems instant on my C2D box. It's really snazzy.

As long as you don't suffer a system crash, you can unload it back to disk when you're done.

Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all that heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the hard drive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc. - and so I can just write to a partition on the hard drive that isn't loaded to RAM. It doesn't suffer at all from the hard drive I/O - I can't really type faster than a hard drive can write.

tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
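For what it's worth, the core of such a mirror daemon is tiny. A hedged sketch in C (the paths and function name are mine, not any particular tool's): one pass copies the RAM-backed file out to its disk-backed twin; a real daemon would trigger this on changes (e.g. via inotify) or on a timer.

```c
#include <stdio.h>

/* One pass of the write-behind mirror described above: copy a RAM-backed
 * file to its disk-backed twin. After a crash, only changes made since
 * the last pass are lost. Returns 0 on success, -1 on error. */
int mirror_file(const char *ram_path, const char *disk_path)
{
    FILE *src = fopen(ram_path, "rb");
    if (!src)
        return -1;
    FILE *dst = fopen(disk_path, "wb");
    if (!dst) {
        fclose(src);
        return -1;
    }

    char buf[8192];
    size_t n;
    int rc = 0;
    while ((n = fread(buf, 1, sizeof buf, src)) > 0) {
        if (fwrite(buf, 1, n, dst) != n) {
            rc = -1;
            break;
        }
    }

    fclose(src);
    if (fclose(dst) != 0)   /* a failed flush counts as a failure too */
        rc = -1;
    return rc;
}
```

In practice you would point ram_path at a tmpfs mount and disk_path at the real partition, exactly as the post describes.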

Re:We're almost there already (2, Insightful)

puto (533470) | more than 4 years ago | (#30612582)

You were able to buy a 10 meg RAM drive in the late 1980s and do this, so this is nothing new. You just are.

Re:We're almost there already (1)

Urkki (668283) | more than 4 years ago | (#30613388)

Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all that heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the hard drive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc. - and so I can just write to a partition on the hard drive that isn't loaded to RAM. It doesn't suffer at all from the hard drive I/O - I can't really type faster than a hard drive can write.

tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.

Not just "available" - that's pretty much how all current operating systems work today. Software operates on a copy in memory (whether reading or writing), and the OS writes back any changes at its leisure. It's just a matter of available RAM vs. required RAM; only if you run out of RAM does the disk become a bottleneck. I don't think data read from disk into memory is ever discarded even if unused for a long time, unless you run out of RAM (why would it be? That's just unnecessary extra work for the OS when there's plenty of unused RAM available already).

Names (-1)

Anonymous Coward | more than 4 years ago | (#30611390)

Can we at least call it something else. Imagine talking to a female co-worker and describing the problem as the phase has changed, she'd think you were talking about her "time of the month!"


CD-RW (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#30611434)

CD-RW is/was phase change technology...

Why the vapourware tag? (4, Insightful)

Areyoukiddingme (1289470) | more than 4 years ago | (#30611522)

How soon we forget. The article is speculative, sure, but the hardware is not only real, it's in mass production by Samsung: http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212 [slashdot.org]

Just looking at the numbers, the article is a bit overblown. Phase change memory will first be a good replacement for flash memory, not DRAM. It's still considerably slower than DRAM. But it eliminates the erasable-by-page-only problem that has plagued SSDs, especially Intel SSDs, and the article does mention SSDs as a bright spot in the storage landscape. PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code. With Samsung's manufacturing muscle behind it, prices per megabyte should be reasonable right out of the gate and as Samsung gets better at it, prices should plummet even faster than flash memory did.

The I/O path between storage and the CPU will get an upgrade, and it could very well be driven by PCM. Flash memory SSDs are already very fast and PCM is claimed to be 4X faster. That saturates the existing I/O paths (barring 16-lane PCIe cards sitting next to the video card in an identical slot). Magnetic hard drives haven't come anywhere close to saturation. Development concentrated for a decade (or two?) on increasing capacity, for which we are thankful, but the successes in capacity development have outrun improvements in I/O speed. In turn, that meant that video cards were the driver behind I/O development, not storage. Now that there's a storage tech in the same throughput class as a video card, I expect there to be a great deal of I/O standards development to deal with it.

But hard drives == tape? Not for a long long time. The development concentration on increasing capacity will pay off for many years to come. PCM arrays with capacities matching modern hard drives (2 TB in a 3.5" half-height case. Unreal!) are undoubtedly a long way off.

Hopefully there are no lurking patent trolls under the PCM bridge...

Re:Why the vapourware tag? (2, Insightful)

maxume (22995) | more than 4 years ago | (#30611650)

The only thing plaguing Intel SSDs is price. And I don't think that particular aspect makes Intel real sad.

Re:Why the vapourware tag? (1)

hedwards (940851) | more than 4 years ago | (#30612504)

Ultimately it does. If they're making say $20 per unit on something, they'd be better off if that thing was selling for $100 than $1000. Sure in reality the margins usually shrink somewhat as the price goes down, but generally so does the cost of production. It's unlikely indeed that Intel's making more money like this than they would be if they could produce the drives for less money.

Re:Why the vapourware tag? (2, Interesting)

drinkypoo (153816) | more than 4 years ago | (#30612138)

It probably got tagged vaporware because where the fuck is my system with MRAM for main memory? MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.

Re:Why the vapourware tag? (2, Insightful)

Ropati (111673) | more than 4 years ago | (#30612392)

Kevin has this right, what an obtuse article.

Henry Newman is talking about PC storage, not enterprise storage. He discusses all disk IO performance in MB/sec, meaning sequential, when in reality very little (disk-level) IO in the enterprise is sequential. The numbers here are flawed, as is the characterization of storage.

Storage is where we keep our data. Keeping data is a central requirement of information technology. It will never be a peripheral feature.

Presently the real IO bottleneck is the spinning platter and the requirements of getting a read/write head to the right place quickly. Newer solid state storage devices will alleviate this bottleneck in the very near future. Perhaps PCM is the solution, but I for one will wait for a GB/$ threshold at which time the winning solid state storage will be available to everyone.

Mr. Newman talks about inter-computer bus speeds as not keeping up with CPUs and memory, when in fact they are keeping up. The place where data transport still can't keep up is serially on a single transport (wire or optical). Networked (switchable) data needs to be a serial single transport for a number of obvious reasons. Like the platter, this is a physical limitation and not easily surmounted.

If and when we get +10GB/sec consumer networks, storage networks (transporting SCSI blocks) will become a thing of the past as we pass and store all our data in an application aware protocol.

Slashdot is computer science (-1, Troll)

Singularity42 (1658297) | more than 4 years ago | (#30611544)

I thought most of slashdot is computer-science only. What is the big-O notation of PCM? Most of slashdot are not engineers - they are computer scientists. Thus, they essentially are wannabe mathematicians.

Re:Slashdot is computer science (1)

maxume (22995) | more than 4 years ago | (#30611666)

I imagine that is a generous characterization.

There seem to be plenty of not-even-computer-related engineers and students here (and others too!), if someone reads me in the wrong direction.

Re:Slashdot is computer science (0)

Anonymous Coward | more than 4 years ago | (#30611922)

Troll detected.

Eh. I've worked in the electronics industry long enough to know that plenty of engineers are also fuckin' morons. It's bad enough that floor scum like me had to save my company a shitload of effort by recommending something as simple and common-sense as the notion of spare connectors also being connector-savers during test. Previously they'd had to shut the whole station down weekly because of pushed pins before they replaced the backplane.

The proles have an excuse - being uneducated. You engineers shouldn't be making these basic common-sense fuckups and yet I continue to see it time and time again. What do you do all day, anyway? We all know you made it through calculus 5 and quantum physics... and yet you still continue to overlook the most obvious shit that a lowly machinist or even the fucking coffee boy would've caught.

There was a recent article about engineers being more likely to turn to terror. Maybe it all comes to the bitterness of having missed out on all that pussy in college. But you da man, Mr. Elitist, you da man.

Re:Slashdot is computer science (1)

glwtta (532858) | more than 4 years ago | (#30612174)

There was a recent article about engineers being more likely to turn to terror. Maybe it all comes to the bitterness of having missed out on all that pussy in college. But you da man, Mr. Elitist, you da man.

Whoa, methinks someone struck a nerve.

Re:Slashdot is computer science (1)

maxume (22995) | more than 4 years ago | (#30614320)

You misread my intent.

disk becomes the new tape (1)

ls671 (1122017) | more than 4 years ago | (#30611564)

> disk becomes the new tape

Well they got this right even if it was not to be accomplished with the mentioned technology.

I think that in the medium/long time range this will undoubtedly come true.

I mean, would any /. reader bet on the chances of hard drives to come on par with today memory access speeds in the future, even with zillions of years of technological advancement ?

     

The 70's called. They want their I/O methods back. (4, Informative)

fahrbot-bot (874524) | more than 4 years ago | (#30611600)

From TFA:

There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be over written. There are lots of possible hints, but there is no standard way of providing file hints...

Ya, we had that back in the stone age, and Multics would have been the poster child for this type of thinking, but it was a *bitch* and made portability problematic. I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment? Unix (and most current OSes) sees everything as a stream of bytes, in most cases, and this is much simpler.

An OS cannot be everything to all people all the time...

fadvise (2, Informative)

Anonymous Coward | more than 4 years ago | (#30611716)

fadvise and FADV_SEQUENTIAL [die.net] exist in POSIX. Not sure how well different OSes like Linux or BSD use the hints - I know that some of it has been broken because of bad past implementations.

Re:The 70's called. They want their I/O methods ba (4, Informative)

Guy Harris (3803) | more than 4 years ago | (#30611760)

From TFA:

There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be over written. There are lots of possible hints, but there is no standard way of providing file hints...

Ya, we had that back in the stone age, and Multics would have been the poster child for this type of thinking, but it was a *bitch* and made portability problematic.

No, Multics would have been the poster child for "there's no I/O, there's just paging" - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory. ("Multi-segment files" were just directories with a bunch of real files in them, each no larger than the maximum size of a segment. I/O was done through read/write calls, but those were implemented by mapping the file, or the segments of a multi-segment file, into the address space and copying to/from the mapped segment.)

I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment? Unix (and most current OSes) sees everything as a stream of bytes, in most cases, and this is much simpler.

"Seeing everything as a stream of bytes" is orthogonal to "a hint that the file will be read sequentially". See, for example, fadvise() in Linux [die.net] , or some of the FILE_FLAG_ options in CreateFile() in Windows [microsoft.com] (Windows being another OS that shows a file as a seekable stream of bytes).

Re:The 70's called. They want their I/O methods ba (1)

mysidia (191772) | more than 4 years ago | (#30611846)

We have it today. Tfa's on crack.

It's called madvise [die.net]

It allows an application to tell the kernel how it expects to use some mapped or shared memory areas, so that the kernel can choose appropriate read-ahead and caching techniques.

In Linux there is also fadvise() [die.net]

Of course... reading from a file (from an app's point of view) is really nothing more than accessing data in a mapped memory area. Oh, I suppose unless you actually use the POSIX mmap call to map the file into memory for reading, you won't have an easy way to provide the advice.

And it makes portability a bitch regardless, as not all OSes are POSIX, and not all OSes have mmap().

Nevertheless, it's not fair to say it is impossible for an app to provide hints. Whether giving the hints actually has a useful effect may be a matter of debate.

Numonyx will probably make it happen (4, Informative)

AllynM (600515) | more than 4 years ago | (#30611602)

Numonyx announced some good advances in PCM a few months back:

http://www.pcper.com/comments.php?nid=7930 [pcper.com]

Allyn Malventano
Storage Editor, PC Perspective

Re:Numonyx will probably make it happen (1)

wa7iut (1711082) | more than 4 years ago | (#30612276)

Their marketing has got the phase change physics wrong, though. Water is not a good substance to make an analogy with GST in this case. Ice is crystalline; liquids are neither crystalline nor amorphous. There's no amorphous phase of water analogous to the amorphous phase of GST. The phase change in GST that represents a 0 or 1 is between a crystalline solid phase and an amorphous glass solid phase. Shrinking the bit cell does not make life easier either, certainly not the "slam dunk" they portray. There are thermal management and materials issues that are challenging, to say the least, as the bit cell shrinks.

This "author' is pretty much irrelevant (4, Insightful)

Zero__Kelvin (151819) | more than 4 years ago | (#30611636)

"I will assume that this translates to performance (which it does not) ..."

I was tempted to stop reading right there, but I kept reading. While his point about POSIX improvements is not bad, the rest of the article is ridiculous. It essentially amounts to: Imagine if we had pretty much exactly what we have today, but we used different words to describe the components of the system! We already have slower external storage (Networked drives / SANs, local hard disk), and incremental means of making data available locally more quickly by degrees (Local Memory, L2 Cache, L1 Cache, etc.) We already get that at the expense of its ability to be accessed by other CPUs a further distance away. It turns out I probably should have stopped reading when I first got the feeling I should when reading the first sentence in the article: "Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant." I can't wait to answer with that one next time and watch jaws drop:

Boss: Where and how are we storing our database, how do we ensure database availability, and how are we handling backups?
me: You're behind the times Boss. That is now irrelevant!

Yeah. That's the ticket ...

It is about time (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30611750)

I am with Linus on this one
Linus is right
The man makes sense
He is absolutely correct on this one

Plastique explosives plus hard drive (0)

Anonymous Coward | more than 4 years ago | (#30611764)

equals phase change memory

Re:Plastique explosives plus hard drive (3, Funny)

v1 (525388) | more than 4 years ago | (#30611800)

or just toss your HD in a forge furnace. You should get two phase changes.

Re:Plastique explosives plus hard drive (1)

jameskojiro (705701) | more than 4 years ago | (#30612072)

If you have a hot enough furnace you may even get THREE phase changes.

If you put it in the LHC you may get FOUR!

Re:Plastique explosives plus hard drive (1)

v1 (525388) | more than 4 years ago | (#30614306)

third state change, not third state

What to do with solid-state memory? (1)

Animats (122034) | more than 4 years ago | (#30611952)

The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory. The usual options are 1) treat it as disk, reading and writing in big blocks, and 2) treat it as another layer of RAM cache, in main memory space. Flash and the like, though, have much faster "seek times" than hard drives, and the penalty for reading smaller blocks is thus much lower. Flash also has the property that writing is slower than reading, while for disk the two are about the same. For small I/O operations, the operating system overhead for the operation takes more time than the actual data access.

For most end users, permanent storage is for storing big sequential files, audio or video. There are interfaces that would make databases faster (one could have flash devices that implemented a key/value store, with onboard lookup), but nobody would notice when playing video. The trend in databases is already to get enough RAM to keep all the indices in RAM, so we're already doing the "read it in the morning" thing suggested in the article. So the payoff for building flash devices to help with that is modest.

There are interesting things to do in this space, but improving reliability in the RAID sense is probably more important than speeding up non-sequential small accesses.

Is there some kind of a prize? (0)

Anonymous Coward | more than 4 years ago | (#30611996)

Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant.

Data storage. Irrelevant. I.. see. The new year is not yet 14 hours old but I feel a certain confidence that this will be the single most vacuous thing I encounter in 2010 - and I've already seen Entertainment Tonight this year.

Boon for Linux, Bust for Windows. (1)

jameskojiro (705701) | more than 4 years ago | (#30612070)

Windows is more closely tied to the whole "separate levels of RAM and hard disk storage" model than Linux is. I could really see Linux gaining traction if all systems went to PCM tomorrow.

Forgetting the lessons of SANs? (4, Interesting)

HockeyPuck (141947) | more than 4 years ago | (#30612196)

Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs. Managing storage back then absolutely sucked. Every server had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.

There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. Plus, compare the number of management points when using a SAN vs internal storage. An enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays. Server admins have more important things to do than replace dead hard drives.

Want to replace a hot spare on a server? What a pain, as you had to understand the volume manager or unique RAID controller in that specific server. I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one, and walks out the door, without me having to do anything about it.

Two words: Low Utilization. You'd buy an HP server with two 36GB drives and the OS+APP+data would only require 10GB of space. So you'd have this landlocked storage all over the place.

Moving the storage to the edge? Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'

"But Google does it!"

Maybe so, but then again they don't run their enterprise based upon Oracle, Exchange, SAP, CIFS/NFS based home directories etc like almost all other enterprises do.

The SAN argument (4, Interesting)

symbolset (646467) | more than 4 years ago | (#30612810)

The SAN argument is that your storage is so precious it must not be stranded. If you're paying $50K/TB with drives, controllers, FC switches, service, software, support, installation and all that jazz then that's absolutely true. If you're doing something like OpenFiler [openfiler.com] clusters on BackBlaze 90TB 5U Storage Pods [backblaze.com] for $90/TB and 720 TB/rack you have a different point of view. As for somebody showing up to replace a drive, I think I could ask Jimmy to put his jacket on and shuffle down to the server room to swap out a few failed drives every couple months - that's what hot and cold spares are for and he's just geeking on MyFace anyway. Low utilization? Use as much or as little as you like - at $90/TB we can afford to buy more. We can afford to overbuy our storage. We can afford to mirror our storage and back it up too. In practice the storage costs less than the meeting where we talk about where to put it or the guy that fills it. If you want to pay for the first tier OEM, it's available but costs 10x as much because first tier OEMs also sell SANs.

Openfiler does CIFS/NFS and offers iSCSI shared storage for Oracle, Exchange and SAP. If you need support, they offer it. [openfiler.com] OpenFiler is nowhere near the only option for this. If you want to pay license fees you could also just run Windows Server clustered. There are BSD options and others as well. Solaris and Open Solaris are well spoken of, and ZFS is popular, though there are some tradeoffs there. Nexenta [wikipedia.org] is gaining ground. There's also Lustre [wikipedia.org] , which HP uses in its large capacity filers. Since you're building your own solution you can use as much RAM for cache as you like - modern dual socket servers go up to 192GB per node but 48GB is the sweet spot.

Now that we've moved redundancy into the software and performance into the local storage architecture, moving storage to the edge is exactly what we want to do: put it where you need it and if you need a copy for data mining then mirror it to the mining storage cluster. We still need some good dedicated fiber links to do multisite synchronous replication for HA, but that's true of SAN solutions also. We're about 20 years past when we should have ubiquitous metro fiber connections, and that's annoying. Right now without the metro fiber the best solution is to use application redundancy: putting a database cluster member server in the DR site with local shared storage.

Oh, and if you need a lot of IOPS then you choose the right motherboard and splurge on the 6TB of PCIe attached solid state storage [ocztechnology.com] per BackBlaze pod for over a million IOPs over 10Gig E. If you need high IOPS and big storage you can use adaptor brackets [ocztechnology.com] and 2.5" SSDs or mix in an array of The Collossus [newegg.com] , though you're reaching for a $6K/TB price point there and cutting density in half but then the SSD performance SAN has an equal multiple and some serious capacity problems. If you go with the SSD drives you would want to cut down the SAS expanders to five drives per 4x SAS link because those bad boys can almost saturate a 3Gbps link while normal consumer SATA drives you can multiply 3:1.

If you're more compute focused then a BackBlaze node with fewer drives and a dual-quad motherboard with 4 GPGPUs is a better answer. At the high end you're paying almost as much for the network switches as you are for the media. If you're into the multipath SAS thing then buy 2x the controllers and buy the right backplanes for that - but it will cost a good bit more.

When Google first did this it was avant-garde. Now it's just simple. This stuff is managed with a web GUI and all the hard stuff is figured out. As for distance, we've worked that out with Ethernet.

/I don't work for or with any of these vendors.

Re:The SAN argument (0)

Anonymous Coward | more than 4 years ago | (#30613158)

The SAN argument is that your storage is so precious it must not be stranded...If you're doing something like OpenFiler

<chop>

The OpenFiler argument is that capital costs (buying a storage solution) involve more scrutiny than recurring operating costs (staff labor). This occurs in dysfunctional or under-capitalized organizations. Of course, many people work in such organizations. So many, in fact, that the well managed and/or well capitalized organizations may actually be the exceptions.

Re:The SAN argument (1)

dkf (304284) | more than 4 years ago | (#30613556)

The OpenFiler argument is that capital costs (buying a storage solution) involve more scrutiny than recurring operating costs (staff labor). This occurs in dysfunctional or under-capitalized organizations. Of course, many people work in such organizations. So many, in fact, that the well managed and/or well capitalized organizations may actually be the exceptions.

Fundamentally, that's because for a lot of organizations it is easier to cut capital costs (by canceling or postponing) than staff costs. The problem with cutting staff? You lose the knowledge that those people have, and the chances are that they will have a lot locked up in their heads that isn't written down, no matter what policies you have in place to mitigate this. Recovering from a round of staff cuts can take years, recovering from delaying the purchase of a piece of kit for a year takes not much more than a year and (provided there's nothing gone catastrophically wrong with the old equipment in the meantime) can actually take less in some senses.

If you're working somewhere with plentiful capital budgets, I envy you. (I also expect that you'll probably be growing soon, and that before too long those capital budgets won't seem nearly so plentiful...)

Re:The SAN argument (1)

mindstrm (20013) | more than 4 years ago | (#30614458)

But you don't buy Backblaze storage pods, right? Backblaze is an online service - they built them for themselves as I understand it.

Yes - there are excellent OSS solutions - if you can keep and maintain an engineering staff who can keep up to speed with things, and build things out, you can absolutely build out lots and lots of storage, and maintain it. Jimmy can swap drives. No problem.

The problem is - as a business grows (that's what they want to do) - this could become unmaintainable. Staffing becomes more difficult. All problems become in-house problems rather than vendor problems (and unfortunately, politics matter). In the end - the commercial SAN/NAS setup cost is nothing compared to the burn rate of the organization.

Threatens to make data storage irrelevant? Hardly! (1)

tomhudson (43916) | more than 4 years ago | (#30612682)

Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant

It's because data storage will ALWAYS be relevant (talk to any Alzheimers' patient if you don't believe me) that access speeds are a concern.

Re:Threatens to make data storage irrelevant? Hard (0)

Anonymous Coward | more than 4 years ago | (#30614232)

> It's because data storage will ALWAYS be relevant (talk to any Alzheimers' patient if you don't believe me) that access speeds are a concern.

I think he means: if RAM is persistent and you have the equivalent of a hard drive's capacity, why would you need to store anything that's already in memory?

This does not kill the SAN (1, Interesting)

Anonymous Coward | more than 4 years ago | (#30613070)

I don't think the author knows much about the purpose of a SAN. A SAN is not just a disk array giving you faster access to disks. Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies, point-in-time server recovery), site recovery (off-site mirrors), or the substantial space savings from technologies like deduplication.
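To illustrate why deduplication saves so much space on an array, here is a minimal sketch of block-level, content-addressed deduplication (the fixed 4 KiB block size and SHA-256 addressing are illustrative assumptions; real arrays typically use more sophisticated chunking):

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real arrays often use variable-size chunking


def dedup_store(data: bytes, store: dict) -> list:
    """Store data block by block, keeping only one copy of each unique block.

    Returns the list of block hashes ("recipe") needed to reconstruct the data.
    """
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # a duplicate block costs no extra space
        recipe.append(digest)
    return recipe


def dedup_read(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(store[d] for d in recipe)


store = {}
data = b"A" * BLOCK_SIZE * 10              # ten identical blocks
recipe = dedup_store(data, store)
print(len(recipe), len(store))             # 10 references, but only 1 stored block
```

Ten logical blocks cost one physical block of storage here; a faster local disk buys you none of that.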

As for speed, my SAN gives me write performance in the range of 600 MB/sec per client. I access my storage over a 10-gigabit Ethernet backbone. Certainly suboptimal, but my blades have a pair of NICs and no disks. It's cheap, very fast, and I have 3-4 rollback points for my ESX cluster. That's around 200 VMs in two sites, active-active, cross-recoverable.
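A quick back-of-the-envelope check of those figures (a sketch only; 1250 MB/s is the theoretical line rate of a 10 GbE link before protocol overhead):

```python
link_gbps = 10                      # 10 GbE backbone
link_mbytes = link_gbps * 1000 / 8  # ~1250 MB/s theoretical line rate
observed = 600                      # claimed per-client write throughput, MB/s

utilization = observed / link_mbytes
print(round(utilization, 2))        # ~0.48: roughly half the link per client
```

So 600 MB/sec per client is plausible on a single 10 GbE path, with headroom left for the second NIC.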

The SAN is not going away.

(In case any of you are designing something similar and want the parts list I'm talking about: Cisco Nexus 5020 10 GbE backbone, BlueArc Mercury 100 cluster with disks slung on an HDS USP-VM, 64 GB cache depth on each path, and a few hundred TB of disk. Servers are HP BL495 G6s with Chelsio cards; the chassis has BNT (HP) 10 GbE switches. I haven't even started with jumbo frames yet; I can do better, but this is pretty good for now. All up it was just over a million AUD.)

What's this? It's a faster storage device. That's a fairly small part of a SAN.

Why not just normal RAM? (1)

Casandro (751346) | more than 4 years ago | (#30613130)

I mean, what's the advantage of phase change memory in this scenario? If you lose power to your CPU or your system crashes, you will have effectively lost your memory contents anyhow. So you might as well open your files with mmap and have lots of RAM. The system will automagically figure out what to swap to disk if RAM isn't enough, as well as regularly back up the contents to disk.

microdisk Radio? (1)

Doc Ruby (173196) | more than 4 years ago | (#30613904)

Is anyone working on micromachines (MEMS) that set vast arrays of very tiny storage discs into very tiny radio transmitters, each disc transceiving on its own very narrow frequency band? A 1 cm^2 chip, perhaps stacked a dozen (or more) layers thick, delivering a couple hundred million discs per layer, each microdisc holding something like 32 bits for a GB per layer, streaming something like 2-200 Tbps per layer, with 10 ns seek time and a few centiwatts consumed per layer.
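Taking the comment's numbers at face value, a quick arithmetic check (a sketch only; all inputs are the figures the comment proposes, not measured values):

```python
discs_per_layer = 200_000_000   # "a couple hundred million discs per layer"
bits_per_disc = 32              # "something like 32 bits" per microdisc

gb_per_layer = discs_per_layer * bits_per_disc / 8 / 1e9
print(gb_per_layer)             # 0.8 -- roughly the "a GB per layer" claimed

# Draining one layer in the 250 ms the follow-up comment mentions:
required_gbps = gb_per_layer * 8 / 0.25
print(required_gbps)            # 25.6 Gbit/s, far below the claimed 2-200 Tbps
```

So the per-layer capacity and the 250 ms full-transfer figure are internally consistent with the quoted streaming rates, with orders of magnitude to spare.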

Or skip the radio and just max out multimode fiber throughput. Parallelizing the data transfer should leave stored data transferable entirely in under 250 ms.
