
How To Use a Terabyte of RAM

kdawson posted more than 6 years ago | from the every-factor-of-25-helps dept.

Data Storage 424

Spuddly writes with links to Daniel Phillips and his work on the Ramback patch, and an analysis of it by Jonathan Corbet up on LWN. The experimental new design for Linux's virtual memory system would turn a large amount of system RAM into a fast RAM disk with automatic sync to magnetic media. We haven't yet reached a point where systems, even high-end boxes, come with a terabyte of installed memory, but perhaps it's not too soon to start thinking about how to handle that much memory.


1 TB of memory... (5, Funny)

Digi-John (692918) | more than 6 years ago | (#22810556)

Finally, I'll have enough space to run Firefox, OpenOffice, and Eclipse *all at the same time*! As long as I don't leave Firefox running too long.

Re:1 TB of memory... (4, Funny)

smitty_one_each (243267) | more than 6 years ago | (#22810672)

You are wise to avoid discussion of emacs...

Re:1 TB of memory... (2, Informative)

Digi-John (692918) | more than 6 years ago | (#22810814)

emacs is a Lisp interpreter, an editor, a games package, an IRC client, and many other things, but its memory usage is just a drop in the bucket compared to the monstrosities I mentioned above. Of course, there's quite a few complete operating systems that can boot in the amount of RAM required by emacs :)

Re:1 TB of memory... (5, Funny)

digital bath (650895) | more than 6 years ago | (#22810936)

Of course, there's quite a few complete operating systems that can boot in the amount of RAM required by emacs :)

emacs, for starters

Re:1 TB of memory... (2, Insightful)

dgatwood (11270) | more than 6 years ago | (#22811404)

Eighty Mebibytes And Constantly Swapping?

Besides, isn't it obvious how one should use a terabyte of RAM? Use it to upgrade your PC to run Windows Vista MegaUltimate, of course. :-D

Or, to put it another way, the question of what to do with the extra RAM is a non-issue. Install software. Software developers will find a way to waste as much RAM as you can put in and performance will still be slow. It's just the nature of progress....

*sigh*

Re:1 TB of memory... (5, Funny)

osu-neko (2604) | more than 6 years ago | (#22810958)

Of course, there's quite a few complete operating systems that can boot in the amount of RAM required by emacs

Yes, but they're substantially less functional operating systems than Emacs.

Re:1 TB of memory... (4, Interesting)

jrockway (229604) | more than 6 years ago | (#22811310)

It's interesting how times have changed. Over the years, emacs has used pretty much the same amount of memory. (My big emacs with erc and gnus is using about 67M right now. Firefox is using 1.7G.)

In the 80s, the overhead of a lisp machine just to make your application customizable was absurd (hence the emacs jokes). Writing an editor all in C was a great idea. Speed! Memory savings! This approach made vi very popular.

Now that it's 2008 and every new computer has a few gigs of RAM, it's not so absurd to write an editor in a dynamic language running on top of a minimal core. An experienced elisp coder can add non-trivial functionality to emacs in just a few hours. emacs makes that easy and enjoyable.

vi(m) may use less memory, but that just doesn't matter anymore. If you want to customize it (non-trivially), you have to hack vim and recompile. So while emacs jokes are hilarious, they date you to the early 80s. There's no reason to write tiny apps in assembly anymore; big apps that can be extended are a much better approach.

Re:1 TB of memory... (5, Insightful)

frovingslosh (582462) | more than 6 years ago | (#22811280)

I'm not sure why people are rating your post as funny. I have not had moderator points in a long time, but if I did I would mark it insightful.

As to the problem of how to use 1 TB of RAM: spending any time at all thinking about this is foolish and wasteful. Of course, I remember the days when we rated our computers by how many kilobytes of memory we had, and plenty of readers here will remember having 20 to 40 meg hard disks in PCs with far less than 1 meg of physical RAM. In those days (and I'll avoid the famous Bill Gates quote on the subject), how would you have spent your time deciding what to do with the memory if you had a computer with 1 gig, 2 gigs or even 4 gigs of memory? You may have come up with all sorts of amazing ideas. But none of them would have done you any good, because the developers (mostly Microsoft, but Linux is far from lean and mean anymore either) had already decided what to do with it: waste it and leave you wanting more. And one of your ideas for a 4 gig system might not have been to just pretend that most of the last gig of memory wasn't there and ignore it!

So why even have a post about what to do with a terabyte of memory? The solution is simple: install Windows 9 and try to quickly order more memory online before the memory-hungry service pack comes out, forces its install on you, and your TB isn't enough.

Re:1 TB of memory... (4, Interesting)

Bandman (86149) | more than 6 years ago | (#22811394)

Virtual machines. Lots of 'em.

You only need 16GB of RAM for this to be useful (5, Insightful)

2nd Post! (213333) | more than 6 years ago | (#22810564)

Given that the core components of an OS are only a few GB, even 8GB systems might be able to do this, today.

Re:You only need 16GB of RAM for this to be useful (3, Funny)

Digi-John (692918) | more than 6 years ago | (#22810732)

640K should be enough for anyone.

Re:You only need 16GB of RAM for this to be useful (0)

Anonymous Coward | more than 6 years ago | (#22810788)

This sounds like something I could use right now on my laptop, which still uses a conventional hard drive.
When I'm surfing the web, nearly all the reads and writes come from the browser touching its cache; transparently mounting this FS over the cache would let the HD stay spun down much longer, even with my meager gigabyte of RAM.
A music player with some rudimentary battery-runtime awareness might also use this to pull the next few songs into RAM without having to provide its own cache implementation.

Re:You only need 16GB of RAM for this to be useful (5, Insightful)

Kjella (173770) | more than 6 years ago | (#22810816)

Personally, I just wish there were better cache hinting in current software. For example, playing a huge movie will swap all my software out to disk, even though the 30GB Blu-Ray movie will likely be played start-to-finish once, so caching it gives no benefit whatsoever. To the best of my knowledge (at least I've never seen it exposed in any API I've used), there's nothing like an "open for reading, with read-ahead, but don't bother keeping it around in the system cache" flag.

Re:You only need 16GB of RAM for this to be useful (2, Interesting)

Ed Avis (5917) | more than 6 years ago | (#22810934)

Database systems use that sort of thing all the time, telling the kernel not to bother caching their file I/O but to send it straight to disk (of course, they have their own cache, configured by the database administrator). Typically, if a database needs to scan a table larger than available memory, it reads the data start to finish off the disk but doesn't cache any of it.
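What's being described here is direct I/O; on Linux the usual mechanism is the O_DIRECT open flag. A minimal sketch of a cache-bypassing table scan (the file name and buffer size are invented for illustration):

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT moves data straight between the disk and our buffer,
       skipping the kernel's page cache entirely. */
    int fd = open("bigtable.db", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Direct I/O requires aligned buffers and transfer sizes;
       4096 matches the usual page/sector size. */
    void *buf;
    if (posix_memalign(&buf, 4096, 1 << 20) != 0) return 1;

    ssize_t n;
    while ((n = read(fd, buf, 1 << 20)) > 0)
        ;   /* scan start to finish; nothing lingers in the system cache */

    free(buf);
    close(fd);
    return 0;
}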

Re:You only need 16GB of RAM for this to be useful (5, Insightful)

QuoteMstr (55051) | more than 6 years ago | (#22810988)

See posix_fadvise. Using that API, a process can have as much control over a file as it needs; too bad the kernel does basically nothing with that information.
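A minimal sketch of what using those hints looks like (the file name is invented; note that POSIX_FADV_NOREUSE in particular has long been a no-op on Linux, which is exactly the complaint):

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>

int open_for_one_pass(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd >= 0) {
        /* "I'll read this start to finish" -- prefetch aggressively. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        /* "...and I won't need it again" -- don't crowd the cache. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);
    }
    return fd;
}

(After a one-shot read, posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) asks the kernel to drop whatever did get cached -- exactly what the Blu-Ray playback case above wants.)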

Re:You only need 16GB of RAM for this to be useful (5, Interesting)

Anonymous Coward | more than 6 years ago | (#22811198)

Did I hear a summer of code application?

Re:You only need 16GB of RAM for this to be useful (2, Interesting)

Dolda2000 (759023) | more than 6 years ago | (#22810890)

Yeah, imagine, then, to be able to use such a fast disk as your swap device! That'll make your system swiftz0rs. Or, hey, wait a minute...

In all honesty, though, I don't really get the point of this. Isn't the buffer cache already supposed to be doing kind of the same thing, only with a less strict mapping?

Re:You only need 16GB of RAM for this to be useful (1)

2nd Post! (213333) | more than 6 years ago | (#22811182)

Swap exists because there is not enough RAM to hold all data, so data is swapped out to the HDD. We are talking about a situation where a system has so much RAM that swap is unnecessary, so parts of the HDD are stored in RAM instead!

That is the point. A buffer cache still requires spinning up the HDD to fill it; if this replaces the buffer cache, the HDD is spun up only once, during boot, and never again except to synchronize the data in RAM back to the HDD.

Re:You only need 16GB of RAM for this to be useful (1)

mpapet (761907) | more than 6 years ago | (#22810918)

Except, maybe I'd like to cache the mother of all queries from my multiple terabytes' worth of DB data? I'm at least half serious. There are a number of viable scenarios where this could be great.

There must be a few more relevant applications. Pitch in!

I'm all for new ideas and getting them out there for people to test. It's one of the major benefits of open systems.

Re:You only need 16GB of RAM for this to be useful (2, Informative)

exley (221867) | more than 6 years ago | (#22810944)

Things like this [sourceforge.net] (somewhat smaller scale) already are [gentoo.org] (somewhat bigger scale) being done.

It's BEEN done, on Windows NT, since v. 3.51... (0)

Anonymous Coward | more than 6 years ago | (#22811144)

"Given that the core components of an OS are only a few GB, even 8GB systems might be able to do this, today." - by 2nd Post! (213333) on Thursday March 20, @03:38PM (#22810564) Homepage
They can, on Windows NT-based systems, since NT 3.51 iirc, in fact... done via SuperDisk &/or SuperVolume, by SuperSpeed.com (formerly EEC Systems):

http://www.superspeed.com/servers/supervolume.php [superspeed.com]

I wrote an article for that company's website whose ideas took them to a finalist position @ Microsoft Tech-Ed 2000-2002 iirc, in one of the harder categories to win (if NOT the hardest): SQL Server Performance Enhancement.

APK

P.S.=> That was while I was being paid to create their SuperCache/SuperCache II tuner code, which started out as a free addon, & then I sold them the code, which made it up to 40% more efficient (because it reminded me of tuning DOS' SmartDrive, lol, & had parameterization possibilities for the driver init stage)...

Anyhow, the article I wrote (for them, AND later for CENATEK, about their RocketDrive SSD) was a good side thing to "turn them on to" (back in 1996 in Windows NT Pro Magazine no less, to a GREAT review by Mr. John Enck, technical editor then & now (the magazine being Windows .NET Magazine OR Windows IT Pro mag now, not sure anymore))...

& it worked - for DATABASE people, & others galore (serving up websites, etc., & just general HUGE reductions in latency in almost anything you can imagine to apply them to, really)... apk

Re:You only need 16GB of RAM for this to be useful (1)

MrLogic17 (233498) | more than 6 years ago | (#22811146)

The same argument goes for the new solid-state drives that are finally becoming affordable. What's the point of swapping out RAM to a page file on a high-speed flash drive?

I'm thinking that the concept of a page file is soon going to become extinct.

Random memory: Back in the day, I think I once created virtual memory on a RAM drive....

Re:You only need 16GB of RAM for this to be useful (1)

OnlineAlias (828288) | more than 6 years ago | (#22811360)


Actually, no, it doesn't. Flash drives are based on non-volatile flash memory and are just about as slow as a regular hard drive. This article talks about using volatile system RAM, which is many, many times faster.

Re:You only need 16GB of RAM for this to be useful (1)

rijrunner (263757) | more than 6 years ago | (#22811278)


You could do it now.

But think a bit further about the implications. It isn't the OS that this is aimed at. From the OS side it would be nice to run a lot of it in RAM, but the reality is that most of the important parts of the OS (shared libs, kernel, and whatnot) are resident in RAM most of the time anyway.

There are a couple of ways to use this, just off the top of my head, that might make it more interesting than presented.

The first is simple: you could load the OS into RAM and then periodically compare the image in RAM against the flashed image on disk. OS files whose binaries have been exploited could be identified and isolated. Another nice feature is that you could really maximize virtual machines: keep a single on-disk image for each type of OS you want to run, and when you activate a partition, pull from that one source. Each virtual machine would then only need to save its own configuration in its profile. (Kind of a RAMDISK version of AIX's WPAR concept.)

Secondly, the real hogs are not the OS. A lot of databases do their own memory management. Say you could ramdisk an Oracle database; that would greatly speed up its access. I do sympathize with users' fears about data loss, but I think adding a flash drive for a journal that is kept current as the system runs could address some of those problems and fears.

Hmm... now that I think about it, they may have missed another area completely: why not a RAMDISK video card? Seems to me you could start carving out chunks of RAM and CPU cycles in a multithreaded system to do the video.

Find a cure to cancer! (0, Redundant)

Doug52392 (1094585) | more than 6 years ago | (#22810572)

With all that RAM, distributed computing projects like Folding@home and SETI@home would have endless memory to work with.

We could cure diseases by doing research on those systems!

First post :)

Ad-Free one-page Version of the story (4, Informative)

saibot834 (1061528) | more than 6 years ago | (#22810574)

For those of you who don't have Adblock: Printer-friendly Version [idg.com.au]

Re:Ad-Free one-page Version of the story (4, Informative)

Freedom Bug (86180) | more than 6 years ago | (#22811016)

Here's a much better link on Jon Corbet's own site, the famous Linux Weekly News:

http://lwn.net/Articles/272011/ [lwn.net]

Re:Ad-Free one-page Version of the story (1, Informative)

Anonymous Coward | more than 6 years ago | (#22811114)

Or you could just read the story on LWN [lwn.net].

Out of curiosity, why the link to PC World when the summary specifically mentions that it's on LWN? If Jon gets paid for the PC World version because of the ad revenue, that's fine with me. However, saying that it's an LWN piece and linking to another source is a bit misrepresentative.

Memory usage (4, Interesting)

qoncept (599709) | more than 6 years ago | (#22810578)

Since we aren't even close to having boxes with more memory than we actively use, and RAM capacity isn't growing any faster than our usage of it, I would think that using it as a "disk" is even further off than the article seems to imply.

Re:Memory usage (2, Insightful)

Bryansix (761547) | more than 6 years ago | (#22810682)

For some uses we use all the RAM and for others we don't. For instance, WIN98 boot disks create a RAMDRIVE, which is pretty useful when you can't access any of your hard drives because they aren't formatted or partitioned.

Mod parent up (1)

Shandalar (1152907) | more than 6 years ago | (#22810744)

Our use of RAM as users expands about as fast as our ability to add sticks of RAM to the box. If the latter started happening at 1000 times the rate of the former, then in 15 years we could talk about the luxury of spending RAM on a disk mirror.

Re:Memory usage (5, Interesting)

wizardforce (1005805) | more than 6 years ago | (#22810796)

since we aren't even close to having boxes with more memory than we actively use
640k should be enough for anyone. You do realize that computer manufacturers happily bundling over 2 gigs of RAM in a default install, just so Vista runs all prettily, gives those of us on Linux a fantastic advantage, since we don't use anywhere near that on a regular basis. There are already Linux distros small enough to sit entirely in RAM, some even small enough to run in L2/L3 cache if you like. Being able to do things like this is going to be a major advantage.

Re:Memory usage (1)

Feyr (449684) | more than 6 years ago | (#22811276)

Just leave Firefox open for a week; it will happily gobble it all up.

Re:Memory usage (3, Insightful)

Ephemeriis (315124) | more than 6 years ago | (#22811218)

RAM is getting cheaper every day. Capacity is constantly growing. I just bought 4 GB RAM for about the same price I paid a few years ago for 1 GB. Right now I could build a system with 16 GB RAM without breaking the bank, all from basic consumer-grade parts available on NewEgg. It isn't going to be long before we see systems with more RAM than we know what to do with. Turning a chunk of it into a big RAMdisk sounds like a good idea to me.

Re:Memory usage (1)

cgenman (325138) | more than 6 years ago | (#22811258)

This sounds a lot like Google's server needs: truly random access at high speeds.

RAM disks were available on the Mac in 1990, and you can get specialized rocket drives that are entirely RAM. How is this so "far off" again?

One Terabyte (4, Funny)

Cedric Tsui (890887) | more than 6 years ago | (#22810586)

One Terabyte ought to be enough for anybody.

Re:One Terabyte (4, Funny)

Captain Splendid (673276) | more than 6 years ago | (#22810606)

One Terabyte ought to be enough for anybody.

Obviously you're running windows XP, not Vista!

Re:One Terabyte (1)

Bryansix (761547) | more than 6 years ago | (#22810752)

Ya, saying one yottabyte of RAM would be enough would have been safer.

Re:One Terabyte (1)

xgr3gx (1068984) | more than 6 years ago | (#22811444)

Oh man, you beat me to it! Haha.
I wanted to say "640 TB should be enough for anybody".
Cool - now quote me on it in 30 years.

Obligatory (1)

noidentity (188756) | more than 6 years ago | (#22810592)

Well, if the OS doesn't have to be *nix, you could run Windows Vista on it. Maybe.

Windows 7? (4, Funny)

Lectoid (891115) | more than 6 years ago | (#22810624)

See also, Windows 7 minimum requirements.

... and cue (0)

Anonymous Coward | more than 6 years ago | (#22810628)

the people who spell it "terrabyte".

Vista SP1 (3, Funny)

sakdoctor (1087155) | more than 6 years ago | (#22810634)

Is that the recommended or minimum requirement?

Re:Vista SP1 (1)

felipekk (1007591) | more than 6 years ago | (#22810756)

It's worse than that: These are the "Vista Capable Logo" numbers.

8 GB (5, Funny)

Rinisari (521266) | more than 6 years ago | (#22810644)

I have 8 GB of RAM and rarely use more than four gigs of it, unless I'm playing a 64-bit game which eats it up (Crysis). Yes, I am running both 64-bit Linux and Windows.

One time, I opened up more than a thousand tabs in Firefox just because I could.

Oh yea? (5, Funny)

SeePage87 (923251) | more than 6 years ago | (#22811078)

Well I can do cock push-ups.

Re:8 GB (1)

ls -la (937805) | more than 6 years ago | (#22811396)

I have 1 GB of RAM, and I rarely use it all up. Of course, I don't play RAM-hungry games or use Vista; those two, and maybe compiling large programs, are all I can think of that would need more than a gig of RAM to function at a reasonable speed.

As a side note on the compiling, I'm doing a thesis on memory paging, and the largest trace we have is of compiling a linux kernel: over 4 million distinct pages, each page 4kB for a total footprint over 16GB.

Re:8 GB (1)

corychristison (951993) | more than 6 years ago | (#22811412)

I have 4GB, still two more slots for another 4GB...

How the hell do you use ~4GB? I do video encoding, compression, editing, graphics, etc. all simultaneously and honestly never go above 2GB. The only time I ever go over that is when I boot up XP via VMware (RAM set to use up to 1GB), though I think I've only done that once since I got Photoshop CS2 and Flash 8 running fine under WINE.

Power Failure (3, Informative)

Anonymous Coward | more than 6 years ago | (#22810688)

One important thing to consider: if you're using a ramdisk for important stuff, what happens when the power dies?

For example, will the data being synced to magnetic media be stored elsewhere? If so, what happens to the speed?

-B

Re:Power Failure (5, Informative)

itsjz (1080863) | more than 6 years ago | (#22810854)

There are about three paragraphs in the article discussing this. Basically, use a UPS:

If line power goes out while ramback is running, the UPS kicks in and a power management script switches the driver from writeback to writethrough mode. Ramback proceeds to save all remaining dirty data while forcing each new application write through to backing store immediately.

As the developer puts it: (1)

mac1235 (962716) | more than 6 years ago | (#22810992)

"You just need to believe in your battery, Linux and the hardware it runs on. Which of these do you mistrust?"

Re:Power Failure (0)

Anonymous Coward | more than 6 years ago | (#22810904)

Use a UPS or battery backup. Even the Gigabyte iRAM comes with a battery backup.

Re:Power Failure (0)

Anonymous Coward | more than 6 years ago | (#22810962)

I'm still wondering: how hard is it to put a rechargeable backup battery in the box for cases just like this? Surely it's easier than designing a DRM system & a "trusted computing" platform. So why not?

Re:Power Failure (1)

Tanman (90298) | more than 6 years ago | (#22811002)

Disk saves still go to the HDD; it just keeps all the files loaded into local memory. So a power failure is no more or less catastrophic, assuming you regularly press CTRL+S.

With that much RAM... (2, Insightful)

erroneus (253617) | more than 6 years ago | (#22810716)

...I might be able to run Vista!!! (I wonder how many people have written this prior to me already?)

It's a lot of RAM and at today's computational speeds, it's not likely that it could be used for anything beyond a RAM drive.

Is it too soon to think about how to use that much RAM? NO! A lack of forward thinking is what caused many of the artificial limitations that have had to be worked around in the past. We're still dealing with limitations in file systems and the like. I've got an old Macintosh that can't access more than 128GB or something like that because its firmware can't handle it... I had to get another PCI controller installed to handle larger drives.

What it is time to think about is how to code without such limitations built in. That would let things grow more easily and naturally.

The problem with giving Windows 1TB... (4, Funny)

Gybrwe666 (1007849) | more than 6 years ago | (#22810748)

The System Tray would end up filling most of my dual monitors with all the crap Microsoft will inevitably find "necessary" to run the OS, leaving me with a small, 640x480 patch and approximately 640k for applications.

Re:The problem with giving Windows 1TB... (1)

Bryansix (761547) | more than 6 years ago | (#22810822)

If you run MS SQL Server and don't manage the RAM then it will use it all just for the fun of it.

Re:The problem with giving Windows 1TB... (3, Informative)

W2k (540424) | more than 6 years ago | (#22810888)

If you run MS SQL Server and don't manage the RAM then it will use it all just for the fun of it.

If you find this in any way strange, wrong or confusing, perhaps you should read up as to what the primary purpose of a frikkin' DATABASE SERVER is.

Here's a hint: the more data it can keep readily accessible (that is, in RAM), the better it will perform. And as you mentioned, you can of course set it to use less RAM if you have to. It's just that it's optimized for performance by default.

Re:The problem with giving Windows 1TB... (1)

Bryansix (761547) | more than 6 years ago | (#22811118)

No, I know that it optimizes for performance. What I don't understand is how a 128k database with no logs and no users would still need to use up a terabyte of RAM. It even does this to the detriment of the console session of the OS GUI. It's a Microsoft product, and it isn't even smart enough to be aware that Windows might need some RAM to function correctly.

uh - there is at least one system with 1TB of RAM (5, Informative)

Anonymous Coward | more than 6 years ago | (#22810766)

You wrote: "We haven't yet reached a point where systems, even high-end boxes, come with a terabyte of installed memory" - this is not true. Sun's E25k can go over 1TB of memory.

How ? (5, Funny)

herve_masson (104332) | more than 6 years ago | (#22810770)

// Use 1TB of RAM (needs <stdlib.h> and <string.h>)
size_t tib = (size_t)1 << 40;   /* 1099511627776 bytes */
char *ptr = malloc(tib);
if (ptr)                        /* malloc can fail, of course */
    memset(ptr, 1, tib);

Re:How ? (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22810872)

Cute, but there's no guarantee any of that malloc'ed space will actually stay in RAM. The OS might decide to dump it all to a HDD or somesuch.

Re:How ? (1)

hey! (33014) | more than 6 years ago | (#22811112)

Sure there is; just "/sbin/swapoff -a", and there's no backing store.
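There's also a per-process answer that doesn't disable swap for the whole system: mlock(2). A minimal sketch (size scaled down for illustration; locking anything near 1 TB would require raising RLIMIT_MEMLOCK or running as root):

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)1 << 30;        /* 1 GiB stand-in for the full TiB */
    char *ptr = malloc(len);
    if (!ptr || mlock(ptr, len) != 0)    /* pin the pages in physical RAM */
        return 1;
    memset(ptr, 1, len);                 /* guaranteed not to hit swap now */
    return 0;
}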

Re:How ? (1)

anonypus_user (1236548) | more than 6 years ago | (#22811186)

::wishes i understood::

nothing new here (3, Informative)

dltaylor (7510) | more than 6 years ago | (#22810790)

Linux gobbles free RAM to add to the buffer cache, so this is already a large RAM disk with automatic sync. In embedded systems, you can even decouple the buffer cache from any physical media and just live in a variable-size RAM disk, which means Linux is finally catching up to AmigaDOS.

1TB of RAM is available today in a server (1)

sf_basilix (821795) | more than 6 years ago | (#22810832)

I believe Sun has a server that can do 1TB of memory.

Re:1TB of RAM is available today in a server (2, Informative)

thinduke (150173) | more than 6 years ago | (#22811026)

IBM p595 [ibm.com] can have 1TB of RAM too. And yes, they run Linux.

Re:1TB of RAM is available today in a server (1)

Nearspace (825338) | more than 6 years ago | (#22811138)

"or up to 2TB of DDR2 memory running at 400 MHz"

fascinating.

Re:1TB of RAM is available today in a server (2, Informative)

thatskinnyguy (1129515) | more than 6 years ago | (#22811050)

Something like this one [sun.com] ?

1TB RAM! (1)

nabil2199 (1142085) | more than 6 years ago | (#22810868)

so now we can run solaris

Re:1TB RAM! (1)

ls -la (937805) | more than 6 years ago | (#22811454)

so now we can run vista
There, fixed that for you.

Didn't someone say once... (1)

michaela (31955) | more than 6 years ago | (#22810880)

that 640GB should be enough?

(Yes, I know he denies actually saying it, but we all know it's true anyway.)

What about copy-on-write for executables? (3, Interesting)

Enleth (947766) | more than 6 years ago | (#22810906)

I'm using regular ramdisks, initialized with data at bootup, composited with temporary empty disk partitions using unionfs and synchronized back to their real partitions on powerdown. That gives me extremely fast read times for most things on such a disk, with conventional write/re-read times. The problem, however, is that to the upper layers of the kernel those ramdisks are not RAM at all, just another block device, so when it comes to loading executables and libraries, they are copied, well, from memory to memory. What's missing is some way to tell the damn thing to use the data pages that are already there and do a copy-on-write only when required. If this mechanism can do that - well, I'll be in as soon as they make it a little more fault-tolerant.

How is this different.... (1)

wowbagger (69688) | more than 6 years ago | (#22810920)

How is this different from the already existing kernel VFS buffer store, other than for the repopulation at startup?

Could you not accomplish this much more simply by having a process read all the blocks in a given block device at startup, thus faulting everything into the kernel buffer cache?
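A minimal sketch of that warm-up idea (the device path is assumed for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[1 << 16];
    int fd = open("/dev/sda1", O_RDONLY);   /* the device to pre-cache */
    if (fd < 0) { perror("open"); return 1; }

    /* Every read faults blocks into the kernel's page cache; while memory
       stays plentiful, later reads of this device are served from RAM. */
    while (read(fd, buf, sizeof buf) > 0)
        ;
    close(fd);
    return 0;
}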

Re:How is this different.... (1)

arcade (16638) | more than 6 years ago | (#22810990)

It doesn't guarantee to sync your data to disk, only to the ramdisk. It will _attempt_ to sync the data to disk, but it won't block to do so.

This means that both your read and your write operations will go splendidly fast.

It also means that you lose if you have a sudden power loss. But in many situations that might not matter much compared to the speed advantage you get out of this.

Cache as RAM, RAM as hard disk (1)

doojsdad (1162065) | more than 6 years ago | (#22810924)

Just get rid of the external hard disk as a storage mechanism altogether. Use the RAM as the "hard disk", create a large L3 cache on the CPU that directly caches the RAM, and let the L1 and L2 caches cache the L3. No problem.

Re:Cache as RAM, RAM as hard disk (1)

exley (221867) | more than 6 years ago | (#22810972)

Until the power goes out or a reboot is needed... :)

Re:Cache as RAM, RAM as hard disk (1, Funny)

Anonymous Coward | more than 6 years ago | (#22811006)

Don't forget to add a racing stripe. It will make it go faster too.

Not quite understanding... (1)

Junta (36770) | more than 6 years ago | (#22810982)

The analysis thankfully compares this to the I/O caching that happens normally. The distinction seems to be that this "innovation" makes calling "sync" a lie, and a roundabout way of doing it at that. That just doesn't seem like a good thing.

I put 16 GB of RAM in a system and operations are quite snappy: the disk cache happily fills and drains, and once data has been read into memory the first time, it feels more or less like a ramdisk system. Sure, sync takes an ungodly amount of time, but that only happens when something wants to make damn sure the system can tolerate an unfortunate event after something important happens.

The sync is a lie!! (0)

Anonymous Coward | more than 6 years ago | (#22811130)

Anyway this sync is great,
It's so delicious and moist...

Give it three or four years, I'd say. (1)

jcr (53032) | more than 6 years ago | (#22811008)

We have desktop systems now that can go up to 32GB RAM, so 1TB isn't that far off.

-jcr

Not so far off (3, Interesting)

Guspaz (556486) | more than 6 years ago | (#22811018)

Current high-end server boards support up to 64GB of RAM (16 slots, 4GB DIMMs).

By Moore's Law, we should hit 1TB in a high-end server in 6 years (64GB to 1TB is four doublings, at roughly 18 months each), in high-end desktops (assume 8GB of RAM, currently selling for $180 CAD) in 10.5 years, and in the average midrange desktop (assume 2GB of RAM, currently selling for $45 CAD) in 13.5 years.

We might be a while off in consumer applications, but for high-end servers, 6 years doesn't seem very far away.

More things change, the more they stay the same... (0)

Anonymous Coward | more than 6 years ago | (#22811080)

Anyone remember RAM disks from the DOS days? This is the same thing: when we had excess RAM, we loaded files into memory. Next, I bet someone is going to tell me that thick-client is more efficient than thin-client...

Video Streaming Server (3, Interesting)

JoeRandomHacker (983775) | more than 6 years ago | (#22811084)

Check out the specs on the Motorola (formerly BroadBus) B-1 Video Server:

http://www.motorola.com/content.jsp?globalObjectId=7727-10991-10997 [motorola.com]

Sounds like a good use for a terabyte of RAM to me.

Disclosure: I currently work for Motorola, but I don't speak for them, and don't have any involvement with this product beyond salivating over it when it was announced that we were buying BroadBus.

We'll be there soon enough. (2, Interesting)

darkmeridian (119044) | more than 6 years ago | (#22811108)

Ten years ago, my PC had 8 megs of system RAM. My laptop now has four gigs of RAM. In ten more years, I am sure we'll have a terabyte of RAM.

take it to the next step... (4, Interesting)

ecloud (3022) | more than 6 years ago | (#22811110)

If you are planning on having a few minutes' worth of UPS backup then why would you need to write to the hard drive continuously? Keep the hard drive spun down (saving power). If the system is being shut down, or AC power fails, then spin up the drive and make a backup of your ramdisk, thus being ready to restore when the power comes back up.

Next step beyond that: stop using a filesystem at runtime. Just assume your data can all fit in memory (why not, if you have a terabyte of it?). This simplifies the code and prevents a lot of duplication (why copy from RAM to RAM just to maintain the distinction that one part of RAM is a filesystem and another part is the working copy?). But you will need a simple way to serialize the data to disk in case of power-down, and a simple way to restore it. This does not need to be a multi-threaded, online operation: when the system is going down, you can cease all operations and just concentrate on doing the archival.
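As a toy sketch of that archival step (the arena size, file name, and the choice of SIGTERM as the "power is going away" notification are all assumptions):

#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

#define ARENA_SIZE ((size_t)1 << 30)   /* all application state lives here */
static char *arena;

static void dump_and_exit(int sig)
{
    (void)sig;
    /* Spin the disk up only now: one big sequential write, then sync.
       open/write/fsync/_exit are all async-signal-safe. */
    int fd = open("state.img", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd >= 0) {
        write(fd, arena, ARENA_SIZE);
        fsync(fd);
        close(fd);
    }
    _exit(0);
}

int main(void)
{
    arena = malloc(ARENA_SIZE);
    if (!arena) return 1;
    signal(SIGTERM, dump_and_exit);    /* sent by a UPS daemon on power loss */
    for (;;) pause();                  /* ...run entirely out of RAM... */
}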

This assumption changes software design pretty fundamentally. Relational databases for example have historically been all about leaving the data on the disk and yet still fetching query results efficiently, with as little RAM as necessary.

Next step beyond that: system RAM will become non-volatile, and the disk can go away. The serialization code is now used only for making backups across the network.

Now think about how that could obsolete the old Unix paradigm that everything is a file.

Billy Ram (0, Troll)

echogen (1166581) | more than 6 years ago | (#22811148)

Don't worry about your 1TB RAM... Microsoft would have the right tools to overflow it by that time!

Windows 3.1 can't even address that much memory (1)

SlappyBastard (961143) | more than 6 years ago | (#22811150)

Geez. Why would I ever need it!?!

If you ever want a fast OS, run Windows 3.1 on a 300 MHz P2 with 64 MB of RAM. Blazing fast.

Let's get to 128 GB of RAM before we start pimping 1 TB.

Am I alone in thinking? (1)

UnknowingFool (672806) | more than 6 years ago | (#22811166)

The first thing I thought of was pr0n. Is that so wrong?

IBM System p can have 2 TB of RAM (1)

The Mad Duke (222354) | more than 6 years ago | (#22811172)

The IBM System p model 595 can hold 2 TB of RAM with 64 processors. I just finished installing 7 of these boxes, each with 1 TB. These servers can run AIX or Linux, but you gotta use AIX if you need lots of memory in a partition. FYI!
- The Mad Duke

It's easy to use a TB of RAM (0, Troll)

gujo-odori (473191) | more than 6 years ago | (#22811180)

Want to use a TB of RAM? It's simple - just install Vista and that terabyte'll be used up before you can say "640K ought to be enough for anybody" :)

Re-inventing the disk cache wheel (3, Interesting)

flyingfsck (986395) | more than 6 years ago | (#22811204)

Geez, I wrote a floppy disk cache driver as a programming homework exercise in the 1980s. Talk of re-inventing the wheel...

Stop thinking in terms of caching? (1)

Doctor Faustus (127273) | more than 6 years ago | (#22811206)

When I started my programming career (1997), my employer had 3-4 servers, the newest of which had a RAID array of Micropolis drives totaling a staggering 18GB for the volume. The older servers had 6GB and 9GB volumes. While we did have to take a bit more care then than now to conserve space, that was enough for an awful lot of tasks.

If I'm reading the specs right, you can now get parts for a PC with 12GB of RAM (mixing DDR2 and DDR3) from NewEgg for something on the order of $1000. While I wouldn't suggest making a file server that just works in RAM (what if you lose power?), what about databases? Modern database servers write to the transaction log (on disk) before they do anything else so that their caching logic can write the changes themselves to disk whenever it's convenient. Why not try a database where the tables themselves aren't on disk at all? Put BLOB fields into actual files, and keep the transaction log on disk (RAID 1), but otherwise the database only exists in RAM. If you need to restart, you just process all the transactions that happened since the last complete backup.

Now, this wouldn't be big enough for everything, but it would be big enough for an awful lot of jobs (stop and think for a minute about just how much information 12GB really is), and it would allow quite good performance on some pretty cheap hardware. And who knows? When you start wanting to support more than will fit in RAM, maybe virtual memory will turn out to be a better model than disk caching.
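A minimal sketch of that write-ahead discipline (record format, function names, and file name are invented for illustration): the only disk operation on the hot path is an appended, fsync'ed log record, while the table itself stays in RAM.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int log_fd;   /* opened once at startup:
                        open("txn.log", O_WRONLY | O_CREAT | O_APPEND, 0600) */

/* Durably log a change, then apply it to the RAM-resident table. */
int apply_update(char *table_cell, const char *value, size_t len)
{
    if (write(log_fd, value, len) != (ssize_t)len)
        return -1;
    if (fsync(log_fd) != 0)            /* transaction is durable from here on */
        return -1;
    memcpy(table_cell, value, len);    /* the table itself never touches disk */
    return 0;
}

On restart, replaying the log since the last complete backup rebuilds the in-memory tables, as described above.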

You can get 2TB in a server today ... (0)

Anonymous Coward | more than 6 years ago | (#22811228)

Finally, (1)

wozzinator (1079319) | more than 6 years ago | (#22811232)

I can bucket sort over 1099511600000 integers in its worst case run.

Yes I could Use it. (1)

jshriverWVU (810740) | more than 6 years ago | (#22811260)

During games or analysis I could store the entire 3-to-6-man endgame tablebases in memory and get rid of the bottleneck that a HD is when doing a lot of searching in a 1.5 TB dataset. So yes, it could be useful to some people. Perhaps not mom and pop checking email, but researchers who crunch large datasets.

Fuzzy difference (1)

Tablizer (95088) | more than 6 years ago | (#22811314)

Perhaps it's time to stop making a hard distinction in software between RAM and disk. I know RAM caching sort of does this, but software still assumes a difference between the two. It may be time to come up with a "generic storage model" of some sort that assumes neither RAM nor disk. That way, when one or the other changes, or an intermediate option (flash RAM?) comes along, the software will be ready. Of course, there may be some overhead in putting an abstraction layer between storage calls, but as time goes on we usually march up the abstraction ladder anyhow.

The closest thing I've worked with to this was Clipper. It would automatically cache data tables in RAM if they fit, and otherwise use regular disk-based indexing and searching (most RDBMSs do this now, but it is hard to "see" it happening because they're on a busy server in a different room). I just "talked" to the tables and didn't worry about whether they were using RAM, disk, or a combo, because the internals managed that. And they were pretty fast, too, when lots of a table could be cached.
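A minimal sketch of what such an abstraction layer could look like in C (all names invented for illustration) is a table of function pointers, so callers never learn which backend they're talking to:

#include <stddef.h>

struct store {
    int (*get)(struct store *s, const char *key, void *buf, size_t len);
    int (*put)(struct store *s, const char *key, const void *buf, size_t len);
};

/* Application code is identical whether 's' is backed by RAM, disk,
   flash, or some combination the backend manages internally. */
int save_record(struct store *s, const void *rec, size_t len)
{
    return s->put(s, "record:42", rec, len);
}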

Process size limit question (1)

smorken (990019) | more than 6 years ago | (#22811374)

Does this require multiple processes to run? From what I understand, current Linux kernels have a ~2GB process size limit (on 32-bit, at least).

Systems with 1TB of memory are for sale now (0)

Anonymous Coward | more than 6 years ago | (#22811388)

We haven't yet reached a point where systems, even high-end boxes, come with a terabyte of installed memory


Yes we have. You can buy a system with 8 terabytes of RAM
here
