Data Storage Technology

Reliability of Computer Memory? 724

olddoc writes "In the days of 512MB systems, I remember reading about cosmic rays causing memory errors and how errors become more frequent with more RAM. Now, home PCs are stuffed with 6GB or 8GB and no one uses ECC memory in them. Recently I had consistent BSODs with Vista64 on a PC with 4GB; I tried memtest86 and it always failed within hours. Yet when I ran 64-bit Ubuntu at 100% load and using all memory, it ran fine for days. I have two questions: 1) Do people trust a memtest86 error to mean a bad memory module or motherboard or CPU? 2) When I check my email on my desktop 16GB PC next year, should I be running ECC memory?"
  • Surprise? (Score:4, Funny)

    by Anonymous Coward on Monday March 30, 2009 @02:00AM (#27384775)

    Recently I had consistent BSODs with Vista64 on a PC with 4GB...

    This was a surprise?

    • Re:Surprise? (Score:5, Informative)

      by Erik Hensema ( 12898 ) on Monday March 30, 2009 @03:39AM (#27385223) Homepage
      Yes. Vista is rock solid on solid hardware. Seriously. Vista is as reliable as Linux. Some people wreck their vista installation, some people wreck their Linux installation.
      • Re: (Score:3, Insightful)

        by Starayo ( 989319 )
        It won't crash often, if at all, it's true, but Vista is way too slow for my taste, and for many others'.
      • Re:Surprise? (Score:5, Informative)

        by bigstrat2003 ( 1058574 ) * on Monday March 30, 2009 @04:13AM (#27385387)
        Agreed. People who will sit and tell me with a straight face that Vista, in their experience, is unstable are either very unlucky, or liars. Windows stopped being generally unstable years ago. Get with the times.
        • Re:Surprise? (Score:5, Insightful)

          by isorox ( 205688 ) on Monday March 30, 2009 @06:49AM (#27386093) Homepage Journal

          Agreed. People who will sit and tell me with a straight face that Vista, in their experience, is unstable are either very unlucky, or liars. Windows stopped being generally unstable years ago. Get with the times.

          I'm not convinced. I have a fairly old desktop at work that I keep for Outlook use only. After a few days Outlook's toolbar becomes unresponsive, and whenever I shut it down it stalls and requires a power-off. Task Manager doesn't say I'm using that much memory (still got cached files in physical RAM).

          I don't use Windows much, so I'm not used to the tricks that keep it running, whereas I probably use those tricks subconsciously to keep my Linux workstation and laptop running.

          I wonder if Windows' continued increase in stability is, at least partly, people subconsciously learning how to adapt to it.

          • Re:Surprise? (Score:5, Interesting)

            by Lumpy ( 12016 ) on Monday March 30, 2009 @09:07AM (#27387003) Homepage

            Vista can hose its user profiles easily, and then the user gets the white-screen loading bug, which causes lots of problems and can even make networking fail for that user.

            It's a profile problem that can be fixed easily by creating a new profile and deleting the old one, but that is way out of the ability of most users.

            This happens a LOT with home users. Out of the last 30 Vista support calls I got, 6 were this problem of corrupt user profiles.

            Honestly, user profiles under Windows have sucked since the 2000 days.

        • Re: (Score:3, Insightful)

          by jamesh ( 87723 )

          A guy at work got his laptop with Vista on it. Explorer would hang often (Explorer, not IE), and if he tried to arrange his second monitor to the left of his laptop screen, the system would BSoD. (Pretty funny: due to physical desk constraints he had his monitor on the left, but he had to move his mouse off the right side of his laptop screen for it to appear on the left-hand second monitor...) We updated all the latest drivers from HP but to no avail.

          Since putting Vista SP1 on though it has bee

        • Re:Surprise? (Score:5, Insightful)

          by MobyDisk ( 75490 ) on Monday March 30, 2009 @07:47AM (#27386357) Homepage

          People who will sit and tell me with a straight face that Vista, in their experience, is stable are either very lucky, or Microsoft shills.

          See? I can say the opposite and provide just as much evidence. Do I get modded to 5 as well? Where's your statistics on the stability of Vista? Did it work well for you, therefore, it works well for everyone else?

          I worked for a company that bought a laptop of every brand, so that when the higher-ups went into meetings with Dell, HP, Apple, etc. they had laptops that weren't made by a competitor. They have had problems like laptops not starting up the first time due to incompatible software. That was as recent as 6 months ago. My mother-in-law bought a machine that has plenty of Vista-related problems (audio cutting out, USB devices not working, random crashes in Explorer) on new mid-range hardware that came with Vista. But I have a neighbor who found it fixed lots of problems with gaming under XP.

          There are plenty of issues. Vista's problems weren't just made up because you didn't experience them.

          Everybody's experience is different. Quit making blanket statements based on nothing.

          • Re: (Score:3, Insightful)

            by kentrel ( 526003 )

            Looks like he posted his opinion based on his experience, and you posted your opinion based on your experience. So you should quit making blanket statements based on nothing too.

            Neither of you posted statistics. Where are yours?

          • Re:Surprise? (Score:5, Insightful)

            by Simetrical ( 1047518 ) <Simetrical+sd@gmail.com> on Monday March 30, 2009 @11:06AM (#27388509) Homepage

            I worked for a company that bought a laptop of every brand, so that when the higher-ups went into meetings with Dell, HP, Apple, etc. they had laptops that weren't made by a competitor. They have had problems like laptops not starting up the first time due to incompatible software. That was as recent as 6 months ago. My mother-in-law bought a machine that has plenty of Vista-related problems (audio cutting out, USB devices not working, random crashes in Explorer) on new mid-range hardware that came with Vista. But I have a neighbor who found it fixed lots of problems with gaming under XP.

            On the other hand, my Linux server freezes up and needs to be reset (sometimes even reboot -f doesn't work) every few days due to a kernel bug, probably some unfortunate interaction with the hardware or BIOS. (I'm using no third-party drivers, only stock Ubuntu 8.04.) And hey, in the ext4 discussions that popped up recently, it emerged that some people had their Linux box freeze every time they quit their game of World of Goo. Just yesterday I had to kill X via SSH on my desktop because the GUI became totally unresponsive, and even the magic SysRq keys didn't seem to work. Computers screw up sometimes.

            What's definitely true is that Windows 9x was drastically less stable than any Unix. Nobody could use it and claim otherwise with a straight face. Blue screens were a regular experience for everyone, and even Bill Gates once blue-screened Windows during a freaking tech demo.

            This is just not true of NT. I don't know if it's quite as stable as Linux, but reasonably stable, sure. Nowhere near the hell of 9x. I used XP for several years and now Linux for about two years, and in my experience, they're comparable in stability. The only unexpected reboots I had on a regular basis in XP were Windows Update forcing a reboot without permission. Of course there were some random screwups, as with Linux. And of course some configurations showed particularly nasty behavior, as with Linux (see above). But they weren't common.

            Of course, you're right that none of us have statistics on any of this, but we all have a pretty decent amount of personal experience. Add together enough personal experience and you get something approaching reality, with any luck.

        • Re: (Score:3, Interesting)

          "Windows stopped being generally unstable years ago."

          Agreed. They have moved away from generalization to specialization now, and Vista is much more specific about how, when, and where it is unstable. Essentially, they pushed the crashes out of the kernel, and all the applications now act funny or crash instead of crashing the kernel.

          "People who will sit and tell me with a straight face that Vista, in their experience, is unstable are either very unlucky ..."

          Saying they are unlucky, when they are unfortuna

        • Re: (Score:3, Interesting)

          by poetmatt ( 793785 )

          What?

          Vista is not 100% stable, never has been, obviously never will be. Do you think it's magically immune to its own BSOD's? I run Vista 64bit myself, and it's "better than XP", but not stable. Apps still get random errors, etc.

          Windows is as stable as it will ever be; at least with Ubuntu you can have a month's uptime and be fine. Now if only Wine was 100% there for gaming (it's getting there).

          • Re: (Score:3, Informative)

            Nothing is 100% stable. That's an awfully high standard to reach. And I get uptimes of a month on my Vista machine too, so I fail to see how you're demonstrating a point of how Windows is so far behind.
          • Re: (Score:3, Informative)

            by jonbryce ( 703250 )

            I get uptimes of 4-5 weeks on Vista. I have to reboot on the Wednesday after the second Tuesday every month for updates.

            I have an uptime of about 6 months on Ubuntu since the last time I rebooted to put an extra hard drive in. I don't have to reboot for updates.

        • Re:Surprise? (Score:5, Insightful)

          by unoengborg ( 209251 ) on Monday March 30, 2009 @09:05AM (#27386973) Homepage

          You are right and you are wrong. Yes, it's true that Vista, XP or even Windows 2k are rock solid, but only as long as you don't add third-party hardware drivers of dubious quality. Unfortunately many hardware vendors don't spend as much effort as they should to develop good drivers. Just using the drivers that come with Windows leaves you with a rather small set of supported hardware, so people install whatever drivers come with the hardware they buy, and as a result they get BSODs if they are unlucky, and then they blame Microsoft.

        • Re:Surprise? (Score:5, Insightful)

          by stuartkahler ( 569400 ) on Monday March 30, 2009 @09:59AM (#27387585)
          Or they're running crappy hardware. Most people blame Windows when their hardware is constantly running on the edge of failure. They have a computer that works fine out of the box, but crashes when the PSU can't keep up with the fifth USB device plugged in. Maybe some heat sinks are clogged with dust.

          The OS running on the cheapest hardware with the most clueless user base has the highest failure rate? You don't say!
      • Re:Surprise? (Score:4, Informative)

        by c0p0n ( 770852 ) <copong@noSpAM.gmail.com> on Monday March 30, 2009 @04:26AM (#27385443)

        I fail to see how the parent is a troll, regardless of whether he is right or not.

        Nevertheless, my experience with Vista is the same: I run Home Premium on a newish laptop I use for music production and haven't had a glitch on it for months. My first intention was to wipe the drive and install XP, but I abandoned the idea some time ago.

        • Re:Surprise? (Score:4, Informative)

          by Erik Hensema ( 12898 ) on Monday March 30, 2009 @04:57AM (#27385591) Homepage

          I fail to see how the parent is a troll, regardless of whether he is right or not.

          That's because I wasn't trolling. Yes, I do know people here on Slashdot don't like to hear positive opinions on Vista, but in fact Vista isn't all that bad.

          I use Linux exclusively on my desktop PC at home and at work. I've been using Linux for over a decade. When I bought a laptop a year and a half ago, it came with Vista. Vista is IMHO a great improvement over XP. It's not even slow on decent hardware. I have yet to receive my first BSOD since SP1 was released. SP0 gave me a few BSODs, maybe 5 in total.

          That being said, I use Linux for work and Vista for play. So the comparison may not be entirely fair.

          • by Dan East ( 318230 ) on Monday March 30, 2009 @06:10AM (#27385937) Journal

            In reference to the parent, gp, ggp, etc. Either I'm reading the alternate-reality edition of Slashdot, or y'all are warming up for Wednesday.

        • Re:Surprise? (Score:5, Insightful)

          by erroneus ( 253617 ) on Monday March 30, 2009 @06:49AM (#27386089) Homepage

          I find that when care is taken with a Windows machine, from Windows 2000 on up, not to install too many programs and/or immature or junk-ware, then Windows remains quite stable and usable. The trouble with Windows is the culture. It seems everything wants to install and run a background process or a quick-launcher or a taskbar icon. It seems many don't care about loading old DLLs over newer ones. There is a lot of software misbehavior in the Windows world. (To be fair, there is software misbehavior in MacOS and Linux as well, but I see it far less often.) But Windows by itself is typically just fine.

          Since the problem is Windows culture and not Windows itself, one has to educate oneself in order to avoid the pitfalls that people tend to associate with Windows itself.

      • Re: (Score:3, Interesting)

        "Vista is as reliable as Linux."

        I can definitely attest to this fact! The family computer dual-boots Vista (it shipped with the 64-bit machine, and is 32-bit of course) and Mandriva Linux 2009 x86_64. Vista has been used to view Oprah's website with its proprietary garbage, but other than that is unused and unmolested. It is a stock install. No third-party stuff has been added other than iTunes. I recently had to install iTunes to restore my iPod after trashing the filesystem, and I can tell y

  • Memtest not perfect. (Score:5, Informative)

    by Galactic Dominator ( 944134 ) on Monday March 30, 2009 @02:01AM (#27384781)

    My experience with memtest is you can trust the results if it says the memory is bad, however if the memory passed it could still be bad. Troubleshooting your scenario should involve replacing the DIMMs in question with known-good modules while running Windows.

    • by 0100010001010011 ( 652467 ) on Monday March 30, 2009 @02:08AM (#27384819)

      I bet Windows will love you replacing the DIMMs while running.

      • by Anthony_Cargile ( 1336739 ) on Monday March 30, 2009 @02:39AM (#27384963) Homepage

        I bet Windows will love you replacing the DIMMs while running.

        Yeah wait until it starts to sleep first, or even better if you catch it while hibernating

        • One eye open! (Score:3, Informative)

          by camperdave ( 969942 )
          ...if you catch it while hibernating

          Be careful. Vista hibernates with one eye open. It can wake itself up from hibernation to do updates. I dual boot my laptop with Linux Mint (an Ubuntu variant). Every week, I'd go to turn on my computer only to find that the battery was dead. Checking the startup logs showed that Linux was starting up at about 3:00 in the morning. After googling, I found out that many people were having that problem. The suggested solution was to turn off Vista automatic updates
    • by mrmeval ( 662166 ) <jcmeval@NoSPAM.yahoo.com> on Monday March 30, 2009 @02:11AM (#27384839) Journal

      I've yet to see memtest86 find an error even though replacing the ram fixed the problem. This has been on several builds.

      • by Antidamage ( 1506489 ) on Monday March 30, 2009 @02:59AM (#27385047) Homepage

        I've often had it pick up bad ram, usually within the first five minutes. One time, the memory in question had been through a number of unprotected power surges. The motherboard and power supply were dead too.

        You can reliably replicate my results by removing the ram, snapping it in half and putting it back in. No need to wait for a power surge to see memtest86 shine.

        • Re: (Score:3, Funny)

          by machine321 ( 458769 )

          That's impressive. Most memory tester software I've tried requires a working power supply and motherboard.

      • by blackest_k ( 761565 ) on Monday March 30, 2009 @05:53AM (#27385865) Homepage Journal

        I've seen memtest find an error, and yes, the RAM was bad.

        There is a bit of a difference between RAM use on Linux and Windows desktops: Linux tends to require less RAM than a Windows system to run, and Windows is far more likely to use all your RAM and page out. In day-to-day use my Linux systems rarely need to use the swap file. If some of your RAM is faulty and never gets used, then you will not see crashes. I'm sure most of us have juggled RAM about and found that swapping slots cures the problem, although reseating RAM can fix problems anyway. If memtest is showing problems then the RAM has problems; bear in mind that some tests can pass while later tests fail.
        memtest is there to prove RAM bad, not good. At higher temperatures than the test was run at, the RAM may become unreliable. It might be the case that the RAM is OK in some systems but not in others; I've seen that too.

      • by Joce640k ( 829181 ) on Monday March 30, 2009 @07:52AM (#27386389) Homepage

        I've had a lot more success with Microsoft's RAM tester, free download here: http://oca.microsoft.com/en/windiag.asp [microsoft.com]

        See, good things do come out of Redmond!

      • Re: (Score:3, Interesting)

        by mysticgoat ( 582871 )

        I run memtest86 overnight (12+ hrs) as a routine part of the initial evaluation of a sick machine. Occasionally it finds errors after several hours that were not present on a single pass test. The last instance was a few months ago: a single stuck bit in one of the progressive pattern memory tests that only showed up after 4+ hours of repetitive testing. Replacing that mem module cured WinXP of a lot of weird flakey behavior involving IEv7 and Word.

        The overnight memtest86 runs have only kicked out errors

    • by Anonymous Coward on Monday March 30, 2009 @02:14AM (#27384849)

      Another nice tool is prime95. I've used it when doing memory overclocking and it seemed to find the threshold fairly quickly. Of course your comment still stands - even if a software tool says the memory is good, it might not necessarily be true.

    • by Hal_Porter ( 817932 ) on Monday March 30, 2009 @03:13AM (#27385115)

      Memtestx86 is bögus. My machine alwayS generated errors when I run the test but it works fOne otherwise ÿ

    • by NerveGas ( 168686 ) on Monday March 30, 2009 @03:37AM (#27385215)

      +1. I once had a pair of DIMMs which would intermittently throw errors in whichever machine they were placed, but Memtest would never detect anything wrong with them - even if used for weeks.

      I called Micron, and they said "Yes, we do see sticks that go bad and Memtest won't detect it." They replaced them for free, the problem went away, and I was happy.

    • by Idaho ( 12907 ) on Monday March 30, 2009 @04:20AM (#27385409)

      My experience with memtest is you can trust the results if it says the memory is bad, however if the memory passed it could still be bad.

      I wonder how strongly RAM stability depends on power fluctuations. While you're testing memory using Memtest, the GPU is not used at all, for example. When playing a game and/or running some heavy compile-jobs, on the other hand, overall power usage will be much higher. I wonder if this may reflect on RAM stability, especially if the power supply is not really up to par?

      If so, you might never find out about such a problem by using (only) memtest.

      • by Znork ( 31774 ) on Monday March 30, 2009 @05:19AM (#27385699)

        A lot. When AM2 boards were new I went through a bunch of bad RAM (memory manufacturers hadn't quite gotten their act together yet), and the RAM voltage would significantly change the number of bits that were 'bad'. At 1.9 V a few bits were bad; at 1.85 V, a few more; at 1.8 V memtest would light up all over.

        So certainly, if any component is subpar, even a slight power fluctuation could trigger a borderline bad bit.

      • by Sen.NullProcPntr ( 855073 ) on Monday March 30, 2009 @05:39AM (#27385807)

        While you're testing memory using Memtest, the GPU is not used at all, for example. When playing a game and/or running some heavy compile-jobs, on the other hand, overall power usage will be much higher.

        I think memtest is a good first-level test - it will pinpoint gross errors in memory, but it probably won't detect more subtle problems. For me the best extended test is to enable all the OpenGL screen savers and let the system run overnight, cycling through each of them. If the system doesn't crash with this it will probably be solid under a normal load. This has been the best test of overall system stability I've found. Unfortunately, if it fails you won't know exactly what is wrong.
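
        For anyone who wants to script that kind of whole-system test rather than relying on screen savers, here is a crude sketch of the idea in C using POSIX threads; the buffer size, pass count, and bit patterns are arbitrary illustrative values, not anything memtest or prime95 actually uses.

        /* Crude memory stress sketch: each thread repeatedly fills its own
         * buffer with a pattern and verifies it. Run it alongside a CPU/GPU
         * load to approximate "whole system" conditions. Illustration only. */
        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define THREADS 4
        #define WORDS   (64 * 1024 * 1024 / sizeof(uint64_t))   /* 64 MB per thread */

        static void *hammer(void *arg)
        {
            volatile uint64_t *buf = malloc(WORDS * sizeof(uint64_t));
            uint64_t pattern = 0xA5A5A5A5A5A5A5A5ULL ^ (uintptr_t)arg;
            unsigned long errors = 0;

            if (!buf)
                return NULL;
            for (int pass = 0; pass < 20; pass++) {
                for (size_t i = 0; i < WORDS; i++)
                    buf[i] = pattern ^ i;                 /* write phase */
                for (size_t i = 0; i < WORDS; i++)
                    if (buf[i] != (pattern ^ i))          /* verify phase */
                        errors++;
                pattern = ~pattern;                       /* alternate the bits */
            }
            if (errors)
                fprintf(stderr, "thread %lu: %lu mismatches!\n",
                        (unsigned long)(uintptr_t)arg, errors);
            free((void *)buf);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[THREADS];

            for (uintptr_t i = 0; i < THREADS; i++)
                pthread_create(&t[i], NULL, hammer, (void *)i);
            for (int i = 0; i < THREADS; i++)
                pthread_join(t[i], NULL);
            return 0;
        }

        Compile with something like "gcc -O2 -pthread stress.c" (the file name is made up) and let it loop while the rest of the machine is busy.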

  • by Anonymous Coward on Monday March 30, 2009 @02:01AM (#27384785)

    wrap your _whole_ computer in tinfoil to deflect those pesky cosmic rays. it also works to keep them out of your head too.

  • Error response (Score:5, Informative)

    by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday March 30, 2009 @02:10AM (#27384829) Homepage

    If a system gives memtest86 errors, I break it down and swap components until it doesn't. The test patterns it uses can find subtle errors you're unlikely to run into with any application-based testing, even when run for a few days. Any failures it reports should be taken seriously. Also: you should pay attention to the memory speed value it reports; that's a surprisingly effective simple benchmark for figuring out whether you've set up your RAM optimally. For the last system I built, I ended up purchasing 4 different sets of RAM, and there was about a 30% delta between how well the best and worst performed on the memtest86 results, which correlated extremely well with other benchmarks I ran too.

    At the same time, I've had memory that memtest86 said was fine, but the system itself still crashed under a heavy Linux-based test. I consider both a full memtest86 test and a moderate workload Linux test to be necessary before I consider a new system to have baseline usable reliability.

    There are a few separate problems here that are worthwhile to distinguish among. A significant amount of RAM doesn't work reliably when tested fully. Once you've culled those out, only using the good stuff, some of that will degrade over time to where it will no longer pass a repeat of the initial tests; I recently had a perfectly good set of RAM degrade to useless in only 3 months here. After you take out those two problematic sources for bad RAM, is the remainder likely enough to have problems that it's worth upgrading to ECC RAM? I don't think it is for my home systems, because I'm OK with initial and periodic culling to kick out borderline modules. And things like power reliability cause me more downtime than RAM issues do. If you don't know how or have the time to do that sort of thing yourself though, you could easily be better off buying more redundant RAM.
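
    To give a sense of what those test patterns look like, here is a rough sketch of one pattern-based pass in C: a simplified moving-inversions style walk over a buffer. This is not memtest86's actual code, and the region size and patterns are arbitrary; it only illustrates why pattern tests catch things that normal application load does not.

    /* Simplified "moving inversions" style pass: write a pattern, verify it
     * while replacing it with its complement, then verify the complement
     * walking backwards. Illustration of the idea, not memtest86 itself. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long check_pattern(volatile uint64_t *buf, size_t n, uint64_t pat)
    {
        unsigned long errors = 0;

        for (size_t i = 0; i < n; i++)          /* fill with the pattern */
            buf[i] = pat;
        for (size_t i = 0; i < n; i++) {        /* verify, then invert in place */
            if (buf[i] != pat)
                errors++;
            buf[i] = ~pat;
        }
        for (size_t i = n; i-- > 0; )           /* verify the inverted pass, backwards */
            if (buf[i] != ~pat)
                errors++;
        return errors;
    }

    int main(void)
    {
        const size_t n = 32 * 1024 * 1024 / sizeof(uint64_t);   /* 32 MB test region */
        volatile uint64_t *buf = malloc(n * sizeof(uint64_t));
        /* all-zeros, all-ones, and two alternating-bit (checkerboard) patterns */
        const uint64_t pats[] = { 0, ~0ULL, 0x5555555555555555ULL, 0xAAAAAAAAAAAAAAAAULL };
        unsigned long errors = 0;

        if (!buf)
            return 1;
        for (size_t p = 0; p < sizeof(pats) / sizeof(pats[0]); p++)
            errors += check_pattern(buf, n, pats[p]);
        printf("%lu mismatches\n", errors);
        free((void *)buf);
        return errors != 0;
    }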

  • Answers (Score:5, Interesting)

    by jawtheshark ( 198669 ) * <slashdot@nosPAm.jawtheshark.com> on Monday March 30, 2009 @02:10AM (#27384835) Homepage Journal

    1) Yes

    2) No

    Now to be serious. Home PCs do not yet come with 6GB or 8GB. Most new home PCs still seem to have between 1GB and 4GB, and the 4GB variety is rare because most home PCs still come with a 32-bit operating system. 3GB seems to be the sweet spot for higher-end home PCs. Your home PC will most likely not have 16GB next year. Your workstation at work, perhaps, but even that is only a perhaps.

    At the risk of sounding like "640KByte is enough for everyone", I have to ask why you think you need 16GB to check your email next year. I'm typing this on a 6-year-old computer; I'm running quite a few applications at the same time and I know a second user is logged in. Current memory usage: 764MB of RAM. As a general rule, I know that Windows XP runs fine on 512MB of RAM and is comfortable with 1GB. The same is true for GNU/Linux running GNOME.

    Now, at work with Eclipse loaded, a couple of application servers, a database and a few VMs... Yeah, there indeed you get memory-starved quickly. You have to keep in mind that such a usage pattern is not that of a typical office worker. I can imagine that a heavy Photoshop user would want every bit of RAM he can get too. The Word-wielding office worker? I don't think so.

    Now, I can't speak for Vista. I heard it runs well on 2GB systems, but I can't say. I got a new work laptop last week and booted briefly in Vista. It felt extremely sluggish and my machine does have 4Gig RAM. Anyway, I didn't bother and put Debian Lenny/amd64 on it and didn't look back.

    In my opinion, you have quite a twisted sense of reality regarding the computers people actually use.

    Oh, and frankly... If cosmic rays were a big issue by now with huge memories, don't you think more people would be complaining? I can't say why Ubuntu/amd64 ran fine on your machine. Perhaps GNU/Linux has built-in error correction and marks bad RAM as "bad".

    • Re:Answers (Score:5, Informative)

      by bdsesq ( 515351 ) on Monday March 30, 2009 @03:59AM (#27385303)

      ... 3GB seems to be the sweet spot for higher-end home PCs.

      3GB is not so much a "sweet spot" as it is a limitation based on a 32 bit OS.
      You can address 4GB max using 32 bits. Now take out the address space needed for your video card and any other cards you may put on the bus, and you are looking at a 3GB max for usable memory.
      So instead of "sweet spot" you really mean "maximum that can be used by 32-bit Windows XP" (the most commonly used OS today).

      • Re:Answers (Score:5, Informative)

        by megabeck42 ( 45659 ) on Monday March 30, 2009 @04:29AM (#27385459)

        Just FYI, 32-bit Intel processors from the Pentium Pro generation and forward (with the exception of most, if not all, of the Pentium-Ms) have 36 or more physical address pins.

        Many, but not all, chipsets have a facility for breaking the physical address presentation of the system RAM into a configurably-sized contiguous block below the 4GB limit and then making the rest available above the 4GB limit. If you're curious, the register (in intel parlance) is often called TOLUD (Top of Low Useable DRAM).

        Furthermore, given modern OS designs on the x86 architecture, a process cannot utilize more than 2GB (Windows without the /3GB boot option) or 3GB (Linux, most BSDs, and Windows with /3GB plus apps specially built to use the 3/1 instead of the 2/2 split).

        However, that limitation does not preclude you from having a machine running eight processes using 2GB of physical memory each.

        The processor feature is called PAE (Physical Address Extension). It works, basically, by adding an extra level of processor pagetable indirection.

        Incidentally, I have a quad P3-700 (it's a Dell PowerEdge 6450) propping a door open that could support 8GB of RAM if you had enough registered ECC PC-133 SDRAM to populate the sixteen DIMM slots.

        Anyways, here's a snippet from the beginning of a 32 bit machine running Linux which has 4GB of RAM:
        [ 0.000000] BIOS-provided physical RAM map:
        [ 0.000000] BIOS-e820: 0000000000000000 - 0000000000097c00 (usable)
        [ 0.000000] BIOS-e820: 0000000000097c00 - 00000000000a0000 (reserved)
        [ 0.000000] BIOS-e820: 00000000000e8000 - 0000000000100000 (reserved)
        [ 0.000000] BIOS-e820: 0000000000100000 - 00000000defafe00 (usable)
        [ 0.000000] BIOS-e820: 00000000defb1e00 - 00000000defb1ea0 (ACPI NVS)
        [ 0.000000] BIOS-e820: 00000000defb1ea0 - 00000000e0000000 (reserved)
        [ 0.000000] BIOS-e820: 00000000f4000000 - 00000000f8000000 (reserved)
        [ 0.000000] BIOS-e820: 00000000fec00000 - 00000000fed40000 (reserved)
        [ 0.000000] BIOS-e820: 00000000fed45000 - 0000000100000000 (reserved)
        [ 0.000000] BIOS-e820: 0000000100000000 - 000000011c000000 (usable)

        The title of that list should really be "Physical Address Space map." Either way, notice that the majority of the RAM is available up until 0xDEFAFE00 and the rest is available from 0x100000000 to 0x11c000000 - a range that's clearly above the 4GB limit.

        Yes, it's running a bigmem kernel... But that's what bigmem kernels are for.

        Oh, incidentally, even Windows 2000 supported PAE. The bigger problem is the chipset. Not all of them support remapping a portion of RAM above 4GB.
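
        As a sanity check on the map above, the "usable" ranges can simply be added up. The three start/end pairs below are copied from the quoted dump; the total comes out to roughly 3.9 GiB, even though only about 3.5 GiB of it sits below the 4GB line. A small sketch:

        /* Sum the "usable" ranges from the e820 map quoted above. */
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            static const uint64_t usable[][2] = {
                { 0x0000000000000000ULL, 0x0000000000097c00ULL },
                { 0x0000000000100000ULL, 0x00000000defafe00ULL },
                { 0x0000000100000000ULL, 0x000000011c000000ULL },  /* remapped above 4GB */
            };
            uint64_t total = 0;

            for (int i = 0; i < 3; i++)
                total += usable[i][1] - usable[i][0];
            printf("usable RAM: %llu bytes (%.2f GiB)\n",
                   (unsigned long long)total, total / (1024.0 * 1024.0 * 1024.0));
            return 0;
        }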

  • The truth (Score:5, Insightful)

    by mcrbids ( 148650 ) on Monday March 30, 2009 @02:18AM (#27384873) Journal

    My first computer was an 80286 with 1 MB of RAM. That RAM was all parity memory. Cheaper than ECC, but still good enough to positively identify a genuine bit flip with great accuracy. My 80386SX had parity RAM, and so did my 486DX4-120. I ran a computer shop for some years, so I went through at least a dozen machines ranging from the 386 era through the Pentium II era, at which point I sold the shop and settled on an AMD K6-2 450. And right about the time that the Pentium was giving way to the Pentium II, non-parity memory started to take hold.

    What protection did parity memory provide, anyway? Not much, really. It would detect with 99.99...? % accuracy when a memory bit had flipped, but provided no answer as to which one. The result was that if parity failed, you'd see a generic "MEMORY FAILURE" message and the system would instantly lock up.

    I saw this message perhaps three times - it didn't really help much. I had other problems, but when I've had problems with memory, it's usually been due to mismatched sticks, or sticks that are strangely incompatible with a specific motherboard, etc. none of which caused a parity error. So, if it matters, spend the money and get ECC RAM to eliminate the small risk of parity error. If it doesn't, don't bother, at least not now.

    Note: having more memory increases your error rate assuming a constant rate of error (per megabyte) in the memory. However, if the error rate drops as technology advances, adding more memory does not necessarily result in a higher system error rate. And based on what I've seen, this most definitely seems to be the case.

    Remember this blog article about the end of RAID 5 in 2009? [zdnet.com] Come on... are you really going to think that Western Digital is going to be OK with near 100% failure of their drives in a RAID 5 array? They'll do whatever it takes to keep it working because they have to - if the error rate became anywhere near that high, their good name would be trashed because some other company (Seagate, Hitachi, etc) would do the research and pwn3rz the marketplace.
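
    For reference, the parity scheme described above is about as simple as error detection gets: one extra bit per byte (or word) records whether the count of 1 bits is even, so any single flipped bit is noticed but cannot be located or repaired, and two flips cancel out entirely. A tiny sketch of the idea in C; the data byte and the simulated flip are arbitrary:

    /* Even parity over a byte: the stored parity bit makes the total number
     * of 1s even. A single bit flip changes that and is detected, but nothing
     * says which bit flipped, and a double flip goes unnoticed. */
    #include <stdint.h>
    #include <stdio.h>

    static unsigned parity_bit(uint8_t b)     /* 1 if b has an odd number of 1 bits */
    {
        unsigned p = 0;
        while (b) {
            p ^= 1;
            b &= b - 1;                       /* clear the lowest set bit */
        }
        return p;
    }

    int main(void)
    {
        uint8_t stored = 0x5A;                /* the data byte written to "memory" */
        unsigned check = parity_bit(stored);  /* the parity bit stored alongside it */

        uint8_t read_back = stored ^ 0x10;    /* simulate one flipped bit on read */
        if (parity_bit(read_back) != check)
            puts("MEMORY FAILURE (parity error) - but no idea which bit, or how to fix it");
        return 0;
    }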

    • Re:The truth (Score:5, Interesting)

      by Mr Z ( 6791 ) on Monday March 30, 2009 @02:49AM (#27385013) Homepage Journal

      Note: having more memory increases your error rate assuming a constant rate of error (per megabyte) in the memory. However, if the error rate drops as technology advances, adding more memory does not necessarily result in a higher system error rate. And based on what I've seen, this most definitely seems to be the case.

      Actually, error rates per bit are increasing, because bits are getting smaller and fewer electrons are holding the value for your bit. An alpha particle whizzing through your RAM will take out several bits if it hits the memory array at the right angle. Previously, the bits were so large that there was a good chance the bit wouldn't flip. Now they're small enough that multiple bits might flip.

      This is why I run my systems with ECC memory and background scrubbing enabled. Scrubbing is where the system actively picks up lines and proactively fixes bit-flips as a background activity. I've actually had a bitflip translate into persistent corruption on the hard drive. I don't want that again.

      FWIW, I work in the embedded space architecting chips with large amounts of on-chip RAM. These chips go into various infrastructure pieces, such as cell phone towers. These days we can't sell such a part without ECC, and customers are always wanting more. We actually characterize our chip's RAM's bit-flip behavior by actively trying to cause bit-flips in a radiation-filled environment. Serious business.

      Now, other errors that parity/ECC used to catch, such as signal integrity issues from mismatched components or devices pushed beyond their margins... Yeah, I can see improved technology helping that.
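
      As a rough picture of what scrubbing buys you: on real hardware the memory controller does the work, but the shape of it is a low-priority patrol loop that touches every location so that single-bit errors are found and corrected before a second flip can land in the same word. A conceptual userspace sketch, with an arbitrary region size and pacing (a real scrubber lives in the chipset and covers all of physical memory, not one process's buffer):

      /* Conceptual "patrol scrub" loop: read every word of a region at a slow,
       * steady rate. On ECC hardware each read gives the controller a chance to
       * detect and correct a single-bit error (and, with demand scrubbing, to
       * write the corrected data back) before errors accumulate. */
      #include <stdint.h>
      #include <stdlib.h>
      #include <unistd.h>

      #define REGION_WORDS (256 * 1024 * 1024 / sizeof(uint64_t))  /* 256 MB example */
      #define CHUNK_WORDS  (4096 / sizeof(uint64_t))               /* one page at a time */

      int main(void)
      {
          volatile uint64_t *region = calloc(REGION_WORDS, sizeof(uint64_t));
          uint64_t sink = 0;

          if (!region)
              return 1;
          for (int pass = 0; pass < 10; pass++) {                  /* patrol passes */
              for (size_t i = 0; i < REGION_WORDS; i += CHUNK_WORDS) {
                  for (size_t j = i; j < i + CHUNK_WORDS && j < REGION_WORDS; j++)
                      sink ^= region[j];                           /* the read is the point */
                  usleep(1000);                                    /* throttle: ~1 page per ms */
              }
          }
          (void)sink;
          free((void *)region);
          return 0;
      }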

  • RAID(?) for RAM (Score:5, Interesting)

    by Xyde ( 415798 ) <slashdot@purrrrTIGER.net minus cat> on Monday March 30, 2009 @02:19AM (#27384875)

    With memory becoming so plentiful these days (granted, I haven't seen many home PCs with 6 or 8GB, but we're getting there) it seems that a single error on a large-capacity chip is getting more and more trivial. Isn't it a waste to throw away a whole DIMM? Why isn't it possible to "remap" this known-bad address, or allocate some amount of RAM for parity the way software like PAR2 works? Hard drive manufacturers already remap bad blocks on new drives. Also it seems to me that, being a solid-state device, small failures in RAM aren't necessarily indicative of a failing component the way bad sectors on a hard drive are. Am I missing something really obvious here or is it really just easier/cheaper to throw it away?

    • Re: (Score:3, Interesting)

      by Rufus211 ( 221883 )

      You just described ECC scrubbing [wikipedia.org] and Chipkill [wikipedia.org]. The technology's been around for a while, but it costs >$0 to implement so most people don't bother. As with most RAS [wikipedia.org] features most people don't know anything about it, so would rather pay $50 less than have a strange feature that could end up saving them hours of downtime. At the same time if you actually know what these features are and you need them, you're probably going to be willing to shell out the money to pay for them.

  • Joking aside... (Score:5, Informative)

    by BabaChazz ( 917957 ) on Monday March 30, 2009 @02:23AM (#27384887)

    First, it was not cosmic rays; memory was tested in a lead vault and showed the same error rate. Turns out to have been alpha particles emitted by the epoxy / ceramic that the memory chips were encapsulated in.

    That said: quite clearly, given your experience, Vista and Ubuntu load the memory subsystem quite differently. It is possible that Vista, with its all-over-the-map program flow, is missing cache a lot more often and so is hitting DRAM harder; I don't have the background to really know. I believe that Memtest86, in order to put the most strain on memory and thus test it in the most pessimal conditions, tries to access memory in patterns that hit the physical memory hardest. But what I have found is that some OSs, apparently including Ubuntu, will run on memory that is marginal - memory that Memtest86 picks up as bad.

    As for ECC in memory... The problem is that ECC carries a heavy performance hit on write. If you only want to write 1 byte, you still have to read in the whole QWord, change the byte, and write it back to get the ECC to recalculate correctly. It is because of that performance hit that ECC was deprecated. The problem goes away to a large extent if your cache is write-back rather than write-through, though there will still be a significant number of cases where you have to write a set of bytes that has not yet been read into cache and does not comprise a whole ECC word.

    That said, it is still used on servers...

    But I don't expect it will reappear on desktops any time soon. Apparently they have managed to control the alpha radiation to a great extent, and so the actual radiation-caused errors are now occurring at a much lower rate, significantly lower than software-induced BSODs.

    • Re: (Score:3, Informative)

      by Ron Bennett ( 14590 )

      It is possible that Vista, with its all-over-the-map program flow, is missing cache a lot more often and so is hitting DRAM harder...

      Perhaps that's another "feature" of Windows - no need for Memtest86 ... just leave Windows running for a few days with some applications running ... and if nothing crashes, the RAM is probably good.

    • Re:Joking aside... (Score:5, Insightful)

      by bertok ( 226922 ) on Monday March 30, 2009 @02:47AM (#27384999)

      As for ECC in memory... The problem is that ECC carries a heavy performance hit on write. If you only want to write 1 byte, you still have to read in the whole QWord, change the byte, and write it back to get the ECC to recalculate correctly. It is because of that performance hit that ECC was deprecated. The problem goes away to a large extent if your cache is write-back rather than write-through, though there will still be a significant number of cases where you have to write a set of bytes that has not yet been read into cache and does not comprise a whole ECC word.

      AFAIK, on modern computer systems all memory is always written in chunks larger than a byte. I seriously doubt there's any system out there that can perform single-bit writes either in the instruction set or physically down the bus. ECC is most certainly not "deprecated" -- all standard server memory is always ECC; I've certainly never seen anything else in practice from any major vendor.

      The real issue is that ECC costs a little bit more than standard memory, including additional traces and logic in the motherboard and memory controller. The differential cost of the memory is some fixed percentage (it needs extra storage for the check bits), but the additional cost in the motherboard is some tiny fixed $ amount. Apparently for most desktop motherboards and memory controllers that few $ extra is far too much, so consumers don't really have a choice. Even if you want to pay the premium for ECC memory, you can't plug it into your desktop, because virtually none of them support it. This results in a situation where the "next step up" is a server-class system, which is usually at least 2x the cost of the equivalent-speed desktop part for reasons unrelated to the memory controller. Also, because no desktop manufacturers are buying ECC memory in bulk, it's a "rare" part, so instead of, say, 20% more expensive, it's 150% more expensive.

      I've asked around for ECC motherboards before, and the answer I got was: "ECC memory is too expensive for end-users, it's an 'enterprise' part, that's why we don't support it." - Of course, it's an expensive 'enterprise' part BECAUSE the desktop manufacturers don't support it. If they did, it'd be only 20% more expensive. This is the kind of circular marketing logic that makes my brain hurt.
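
      To put a number on that fixed percentage: the usual DIMM-level scheme is SECDED (single error correct, double error detect), which stores 8 check bits for every 64 data bits - one extra DRAM chip per rank, roughly 12.5% more storage. Here is the same idea in miniature, an extended Hamming code over 4 data bits rather than the 64/72-bit code real DIMMs use; it is only meant to show why one flipped bit is fixable while two are merely detectable:

      /* Extended Hamming (SECDED) code in miniature: 4 data bits, 3 Hamming
       * check bits, plus one overall parity bit. One flipped bit is located
       * and corrected; two flipped bits are detected but not correctable. */
      #include <stdio.h>

      /* Encode 4 data bits (d3..d0) into an 8-bit codeword: bit positions 1..7
       * hold the Hamming(7,4) code, bit 0 holds the overall parity bit. */
      static unsigned encode(unsigned d)
      {
          unsigned d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
          unsigned p1 = d0 ^ d1 ^ d3;              /* covers positions 1,3,5,7 */
          unsigned p2 = d0 ^ d2 ^ d3;              /* covers positions 2,3,6,7 */
          unsigned p4 = d1 ^ d2 ^ d3;              /* covers positions 4,5,6,7 */
          unsigned w = (p1 << 1) | (p2 << 2) | (d0 << 3)
                     | (p4 << 4) | (d1 << 5) | (d2 << 6) | (d3 << 7);
          unsigned overall = 0;
          for (unsigned t = w >> 1; t; t >>= 1)    /* parity over positions 1..7 */
              overall ^= t & 1;
          return w | overall;                      /* overall parity goes in bit 0 */
      }

      /* Decode into *d. Returns 0 = clean, 1 = single error corrected,
       * 2 = double error detected (data not trustworthy). */
      static int decode(unsigned w, unsigned *d)
      {
          unsigned b[8];
          for (int i = 0; i < 8; i++)
              b[i] = (w >> i) & 1;
          unsigned s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
          unsigned s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
          unsigned s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
          unsigned syndrome = s1 | (s2 << 1) | (s4 << 2);   /* position of a single error */
          unsigned overall = 0;
          for (int i = 0; i < 8; i++)
              overall ^= b[i];                              /* 0 if an even number of flips */
          int status = 0;
          if (syndrome && overall) {            /* one error at 'syndrome': flip it back */
              b[syndrome] ^= 1;
              status = 1;
          } else if (syndrome && !overall) {    /* two errors: detectable, not fixable */
              status = 2;
          } else if (!syndrome && overall) {    /* the overall parity bit itself flipped */
              status = 1;
          }
          *d = b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3);
          return status;
      }

      int main(void)
      {
          unsigned out;
          unsigned word = encode(0xB);
          int status;

          status = decode(word, &out);
          printf("clean:            status %d, data 0x%X\n", status, out);
          status = decode(word ^ 0x20, &out);              /* flip one bit */
          printf("one bit flipped:  status %d, data 0x%X\n", status, out);
          status = decode(word ^ 0x28, &out);              /* flip two bits */
          printf("two bits flipped: status %d, data 0x%X\n", status, out);
          return 0;
      }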

  • Depends (Score:5, Interesting)

    by gweihir ( 88907 ) on Monday March 30, 2009 @02:24AM (#27384901)

    My experience with a server that recorded about 15TB of data is something like 6 bit errors per year that could not be traced to any source. This was a server with ECC RAM, so the problem likely occurred in buses, network cards, and the like, not in RAM.

    For non-ECC memory, I would strongly suggest running memtest86+ for at least a day before using the system, and if it gives you errors, replace the memory. I had one very persistent bit error in a PC in a cluster that actually required 2 days of memtest86+ to show up once, but did occur about once per hour for some computations. I also had one other bit error that memtest86+ did not find, but the Linux command-line memory tester found after about 12 hours.

    The problem here is that different testing/usage patterns result in different occurrence probabilities for weak bits, i.e. bits that only sometimes fail. Any failure in memtest86+ or any other RAM tester indicates a serious problem. The absence of errors in a RAM test does not indicate the memory is necessarily fine.

    That said, I do not believe memory errors have become more common on a per-computer basis. RAM has become larger, but also more reliable. Of course, people participating in the stupidity called "overclocking" will see a lot more memory errors and other errors as well. But a well-designed system with quality hardware and a thorough initial test should typically not have memory issues.

    However, there is "quality" hardware that gets it wrong. My ASUS board sets the timing for 2 and 4 memory modules to the values for 1 module. This resulted in stable 1- and 2-module operation, but got flaky with 4 modules. Finally I moved to ECC memory before I figured out that I had to manually set the correct timings. (No BIOS upgrade was available that fixed this...) This board has "professional" in its name, but apparently "professional" does not include the use of generic (Kingston, no less) memory modules. Other people have memory issues with this board as well that they could not fix this way; it seems that sometimes a design is just bad, or even reputable manufacturers do not spend a lot of effort to fix issues in some cases. I can only advise you to do a thorough forum search before buying a specific mainboard.

     

  • Then it would proba%ly alter not just one byte, b%t a chain of them. The cha%n of modified bytes would be stru%g out, in a regular patter%. Now if only there were so%e way to read memory in%a chain of bytes, as if it w%re a string, to visu%lize the cosmic ray mod%fication. hmmm...

  • Settings matter too (Score:5, Informative)

    by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Monday March 30, 2009 @02:29AM (#27384915) Homepage

    Not all memory is created equal. Memory can be bad if Memtest detects errors, or you can simply be running it at the wrong settings. Usually there are both "normal" and "performance" settings for memory on higher end motherboards, or sometimes you can tweak all sorts of cycle-level stuff manually (CAS latency etc.).

    Try running your memory with the most conservative settings before you assume it's bad.

  • by gQuigs ( 913879 ) on Monday March 30, 2009 @02:31AM (#27384925) Homepage

    Depending on where it fails (if it fails in the same spot) you can relatively easily work around it and not throw out the remaining good portion of the stick. I wrote a howto:

    http://gquigs.blogspot.com/2009/01/bad-memory-howto.html [blogspot.com]

    I've been running on Option 3 for quite some time now. No, it's not as good as ECC, but it doesn't cost you anything.
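
    For what it's worth, the mainline kernel also has a boot-time knob for this kind of workaround: the memmap parameter marks a physical range as reserved so the allocator never hands it out. Purely as an illustration (the address below is made up, and the "$" usually needs escaping in the boot loader's configuration), reserving 64K starting at a known-bad address looks something like:

    memmap=64K$0x12340000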

  • Trust Memtest86 (Score:5, Informative)

    by nmos ( 25822 ) on Monday March 30, 2009 @02:46AM (#27384993)

    1) Do people trust a memtest86 error to mean a bad memory module or motherboard or CPU?

    Well, I'd add some other possibilities such as:

    Bad power supply.
    Memory isn't seated properly in its socket.
    Incorrect timings set in the BIOS.
    Memory is incompatible with your motherboard.
    Etc.

    But yeah, if memtest86 says there's a problem then there really is something wrong.

  • by Nom du Keyboard ( 633989 ) on Monday March 30, 2009 @03:09AM (#27385095)
    Was it cosmic rays, or Alpha particle decay from impure materials that was going to do in our memory soon? IIRC it was the latter.
  • OK (Score:3, Insightful)

    by Runefox ( 905204 ) on Monday March 30, 2009 @03:17AM (#27385127)

    1) Do people trust a memtest86 error to mean a bad memory module or motherboard or CPU?

    Yes. I do, anyway; I've never had it report a false positive, and it's always been one of the three (and even if it was cosmic rays, it wouldn't consistently come up bad, then, would it?). Then again, it could also mean that you're using RAM that requires a higher voltage than your motherboard is giving it. If it's brand-name RAM, you should look up the model number and see what voltage the RAM requires. Things like Crucial Ballistix and Corsair Dominator usually require around 2.1V.

    2) When I check my email on my desktop 16GB PC next year, should I be running ECC memory?

    Depends. If you're doing really important stuff then sure. ECC memory is quite a boon in that case. If you're just using your desktop for word processing and web browsing, it's a waste of money.

  • by Jay Tarbox ( 48535 ) on Monday March 30, 2009 @06:49AM (#27386091) Homepage Journal

    http://www.ida.liu.se/~abdmo/SNDFT/docs/ram-soft.html [ida.liu.se]

    This references an IBM study, which is what I think I actually remember but could not find quickly this morning.

    "In a study by IBM, it was noted that errors in cache memory were twice as common above an altitude of 2600 feet as at sea level. The soft error rate of cache memory above 2600 feet was five times the rate at sea level, and the soft error rate in Denver (5280 feet) was ten times the rate at sea level."
