
Hybrid Seagate Hard Drive Has Performance Issues

kdawson posted more than 4 years ago | from the email-the-canadian-pharmacy dept.


EconolineCrush writes "The launch of Seagate's Momentus XT hard drive was discussed here last week, and for good reason. While not the first hybrid hard drive on the market, the XT is the only one that sheds the Windows ReadyDrive scheme for an OS-independent approach Seagate calls Adaptive Memory. While early coverage of the XT was largely positive, more detailed analysis reveals a number of performance issues, including poor sequential read throughput and an apparent problem with command queuing. In a number of tests, the XT is actually slower than Seagate's year-old Momentus 7200.4, a drive that costs $40 less."


67 comments


Well (0, Redundant)

bazald (886779) | more than 4 years ago | (#32428672)

That's disappointing.

Re:Well (5, Informative)

Peach Rings (1782482) | more than 4 years ago | (#32428720)

The drives are fine, it's just a firmware issue. They'll fix it in the next few months. It's not like people who bought the drives are screwed because of faulty equipment.

Re:Well (4, Insightful)

Aeternitas827 (1256210) | more than 4 years ago | (#32428754)

This is why I hesitate to be an early adopter of new technology. There are always real-world conditions that show up once a wider sample becomes available (i.e., the release to market) than can be reproduced in a lab during testing--and that's true of virtually ANY product. While the problems generally are fixable, it's a pain in the rear to deal with them in the interim. I'll let others be the guinea pigs, thank you very much.

Re:Well (0)

Anonymous Coward | more than 4 years ago | (#32429496)

The problem isn't that lab testing can't account for real-world conditions; it's that companies don't take the time to account for them. They rush the product to market before it has really been tested sufficiently.

Re:Well (1)

zerro (1820876) | more than 4 years ago | (#32430942)

Beta is the new Gold. At least that seems to be the trend in the past decade or so. Unfortunately, nobody likes the concept of "ship now, fix later", especially when it comes to things like automobiles and airplanes.

Re:Well (1)

drsmithy (35869) | more than 4 years ago | (#32431582)

Unfortunately, Nobody like's the concept of "ship now, fix later" [...]

Huh ? Aren't the OSS people constantly telling us it's the best release model evar ?

Re:Well (1)

sjames (1099) | more than 4 years ago | (#32433672)

Yes, but that's why most projects that do that have a stable and testing version. They also aren't generally selling the bleeding edge version.

Now, how many of the ship-now-fix-later products come with specifications complete enough for a reasonably skilled end user to modify the firmware?

Re:Well (5, Interesting)

TOGSolid (1412915) | more than 4 years ago | (#32428772)

If there's one thing I've learned with Seagate, it's that they're terrible at fixing firmware issues. Their 500GB hard drives for laptops were notorious for having issues caused by crappy firmware that never got resolved by the time I trashed mine.

Re:Well (4, Interesting)

AK Marc (707885) | more than 4 years ago | (#32428888)

There was a noise issue with drives around 2001 that they wouldn't fix. Then Dell said something to the effect of "they think our computers are crap; you fix it or we stop buying from you," and it was fixed. Anything smaller than that and they would ignore it. I did update the firmware, and it made a huge difference in noise.

Re:Well (3, Interesting)

FreakyGreenLeaky (1536953) | more than 4 years ago | (#32429146)

Sadly, this seems to be the case with quality as well.

We buy in batches, and my experience has shown that a minimum of 10-15% of the (Seagate) drives will be defective in some way.

They used to be so damn reliable.

Re:Well (0)

Anonymous Coward | more than 4 years ago | (#32429548)

Seagate has been crap for a while now.

There used to be Conner, Seagate, and Maxtor.

Maxtor made 'good' cheap drives.

Conner made 'meh' cheap drives.

Seagate made more expensive 'good' drives, but stopped doing that and started buying other drive companies.

Now all we have left is Seagate, and no 'good' drives at any price available there.

Pick another company. You'll save a lot of the time you currently spend replacing dead Seagates.

(My current fav is Hitachi. So long as they don't start making Deathstar drives again :P)

Re:Well (2, Informative)

drinkypoo (153816) | more than 4 years ago | (#32429598)

Seagate has been crap for awhile now.

You mean like "forever"?

There used to be conner, seagate, and maxtor.
maxtor made 'good' cheap drives.
conner made 'meh' cheap drives
seagate made more expensive 'good drives'. but stopped doing that and started buying other drive companys.

I agree with you about Maxtor, but Conner made SHIT cheap drives. Nothing from Conner Peripherals was ever worth buying. Unfortunately, CP drives were OEM'd for some time due to cost. Seagate has NEVER made "good" drives except for enterprise-class stuff. I grew up in Santa Cruz and so I had a ready supply of used Seagate disks. We used to call them Seizegate because the drives would constantly succumb to stiction. Not a month went by that I didn't have to pull a disk and whack it with a screwdriver to get my system to boot. Once I actually had to pull a cover and manually rotate the spindle, it was stuck so hard; it took significant effort to get it to turn. The same disk later burned a trace right off its PCB, then I soldered a jumper wire, then it burned off the jumper wire. It never hurt anything else in the process, but back then computers still contained a bunch of TTL.

(my current fav is hitachi. So long as they dont start making deathstar drives again :P)

That makes me kind of fear them. I just buy WD. So far I've had the best luck with Maxtor and WD over my career.

Re:Well (1)

morgan_greywolf (835522) | more than 4 years ago | (#32429886)

Actually, some of the best, most reliable drives are Seagate drives. Well, remanufactured Seagate drives, anyway. According to a friend of mine who used to work for Seagate, the "factory recertified" drives are rebuilt in a Seagate plant in Mexico, and they come from that facility better than new.

Western Digital drives, in my experience, are very, very cheap.

If you want the best reliability, buy SCSI or SAS drives rather than ATA or SATA. They are enterprise-grade equipment and far more reliable.

Re:Well (2, Funny)

sparrowhead (1795632) | more than 4 years ago | (#32432134)

Worst case used to be having a WD and a Seagate drive mounted close to each other: the WD slowly killed the Seagate with its vibrations while the Seagate fought back with heat.

Re:Well (1)

aztracker1 (702135) | more than 4 years ago | (#32432888)

Funny, I always bought their enterprise-level drives, which were always a little slower, but rock solid. Lately, I've gone with WD for most of my HDD purchases, choosing the "Green" drives when I've wanted more reliability (like my NAS box). I've become a pretty big fan of Intel SSDs, though; I cannot believe the difference. But the space in my laptop, which only holds one HDD, is pretty limiting, as anything more than a 160GB SSD is insanely pricey. It kinda sucks the way SSD pricing works, being pretty linear with capacity. Nothing like HDD pricing has been.

Re:Well (1)

nametaken (610866) | more than 4 years ago | (#32436252)

Yeah, but given your experience purchasing drives in quantity, I'm sure you've noticed that while failure rates change pretty wildly between manufacturers, more importantly they vary quite a bit between models and revisions. Google's drive failure report (now a bit dated, but I'd guess still relevant) was pretty informative:

http://www.engadget.com/2007/02/18/massive-google-hard-drive-survey-turns-up-very-interesting-thing/ [engadget.com]

Re:Well (1)

FreakyGreenLeaky (1536953) | more than 4 years ago | (#32441902)

Right you are. I've had a truckload of 1TB drives fail, yet the old 80GB/160GB drives just refuse to die.

Re:Well (0)

Anonymous Coward | more than 4 years ago | (#32429342)

Yeah, Seagate has scammed its last dollar out of me as well. I just put to rest another 1.5 TB SATA drive; it lasted a whole 6 months after the first RMA. Garbage products.

Re:Well (0)

Anonymous Coward | more than 4 years ago | (#32429596)

If there's one thing I've learned with Seagate, it's that they're terrible at fixing firmware issues.

As was also evidenced by the bricked 1 TB drives.

http://hardware.slashdot.org/story/09/01/17/0115207/Seagate-Hard-Drive-Fiasco-Grows

Re:Well (3, Insightful)

Anonymous Coward | more than 4 years ago | (#32428802)

"Just firmware"? Don't we remember the fiasco from last year... and their inability to handle it properly?

There's nothing to "fix" (2, Interesting)

Joce640k (829181) | more than 4 years ago | (#32429608)

The SSD is a cache, and caches don't do "sequential read".

e.g. Let's read the whole of RAM sequentially and see how well your CPU cache performs. Oh, noes! We found a "performance problem"!!!

If all you do is switch on, read email, switch off, you'll see a massive boost the next time you do it. Still, better not risk having that because there's an article somewhere on the Internet!

Re:There's nothing to "fix" (1)

TheSunborn (68004) | more than 4 years ago | (#32431998)

Quote:
e.g. Let's read the whole of RAM sequentially and see how well your CPU cache performs.

That would be the absolute best case for a CPU cache, because when you read the first 32 bits, the cache will fetch the following 128/256 bits (depending on cache line size/CPU/magic), which can then be served directly from cache. Much faster.

I saw a test where they added all the numbers in a 128MB array. They made 2 versions: one that iterated through the array first to last, and one that ran last to first. The one running last to first (that is, running backwards through memory) was much slower (a factor of 2 or 3, I think).

Or maybe my * detector is just broken.
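A minimal sketch of the kind of test described above, scaled down from 128MB. Note this is an illustration of the methodology only: in C the forward pass tends to win because the hardware prefetcher streams cache lines ahead of a sequential scan, but in Python the interpreter overhead usually swamps any cache effect, so don't read too much into the timings.

```python
import array
import time

# Sum a large array of doubles front-to-back and back-to-front.
N = 1 << 21  # 2M doubles = 16MB (the test described used 128MB)
data = array.array('d', (1.0 for _ in range(N)))

def sum_forward(a):
    total = 0.0
    for i in range(len(a)):          # ascending addresses
        total += a[i]
    return total

def sum_backward(a):
    total = 0.0
    for i in range(len(a) - 1, -1, -1):  # descending addresses
        total += a[i]
    return total

for name, fn in (("forward", sum_forward), ("backward", sum_backward)):
    t0 = time.perf_counter()
    result = fn(data)
    print(f"{name}: sum={result:.0f} in {time.perf_counter() - t0:.3f}s")
```

Both passes compute the same sum; only the memory access direction differs.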
 

Re:There's nothing to "fix" (1)

Joce640k (829181) | more than 4 years ago | (#32432424)

The cache only served up each piece of data once! After that it was discarded...what a terrible waste!

Re:There's nothing to "fix" (0)

Anonymous Coward | more than 4 years ago | (#32441042)

e.g. Let's read the whole of RAM sequentially see how well your CPU cache performs.

That would be the absolutely best case usage for a cpu cache

Errr, no. The best case for the CPU cache is when you're continually accessing data that is already in the cache; the cache can then serve data at its maximum bandwidth.

In your example the cache does nothing to improve your memory bandwidth -- those extra "prefetched" bits in the line still had to come from memory. Granted, it's not as bad as the case where you step through the array with a stride equal to the line size. In that case we're wasting 7/8 of the bandwidth (because the CPU is requesting lines instead of words), so we're actually worse off than having no cache.
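The 7/8 figure above falls out of a little arithmetic, assuming 64-byte cache lines and 8-byte words (those sizes are my assumption; the post doesn't state them):

```python
LINE_BYTES = 64   # assumed cache line size
WORD_BYTES = 8    # assumed word size

def wasted_fraction(line=LINE_BYTES, word=WORD_BYTES):
    # Striding through an array with stride >= line size means every
    # access fetches a full line from memory but consumes only one word,
    # so (line - word) / line of the transferred bytes are wasted.
    return (line - word) / line

print(wasted_fraction())  # 0.875, i.e. 7/8 of the bandwidth wasted
```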

Re:There's nothing to "fix" (1)

sjames (1099) | more than 4 years ago | (#32433826)

One would expect that a pathological access pattern for the cache would fall back to performing no better than the drive without the new cache. However, in this case the XT performs significantly worse than the Momentus 7200.

Re:Well (1)

Jarik_Tentsu (1065748) | more than 4 years ago | (#32429906)

The 7200.11 drives had firmware issues. They'd stutter and just have bad performance with the SD-series firmware. Eventually Seagate released a patch, and I think that fixed it up.

Then new models came out with the CC-series firmware (Still 7200.11, not 7200.12). These models just died - the 'click' of death. Seems like a hardware issue, despite supposedly being only a firmware change.

In any case, this is happening way too often. Seagate used to be reliable...

Re:Well (1)

sjames (1099) | more than 4 years ago | (#32433494)

This sell-it-now, fix-it-with-a-firmware-update-later trend is getting well beyond crazy. Perhaps they shouldn't sell it until after they come up with that magic firmware in the next few months.

Re:Well (4, Funny)

dimethylxanthine (946092) | more than 4 years ago | (#32428760)

Quite a common occurrence with hybrids, actually. There are unique difficulties when cross breeding heteroploid organisms, which manifest in.... oh wait.

expected behaviour (3, Interesting)

MonoSynth (323007) | more than 4 years ago | (#32428750)

poor sequential read throughput

That's the expected behaviour of this disk. Extremely fast for common tasks (booting and loading apps) and slower for less common and less performance-critical tasks. If you really need the SSD-like performance for all your tasks, buy a 500GB+ SSD, if you have the money for it.

In a number of tests, the XT is actually slower than Seagate's year-old Momentus 7200.4, a drive that costs $40 less.

That's because it's probably a $40 cheaper disk with an $80 SSD attached to it.

Re:expected behaviour (2, Insightful)

twisteddk (201366) | more than 4 years ago | (#32428894)

While I don't share your views on the technology, I do agree that this is expected behavior from a hybrid drive. I have yet to see a hybrid drive that actually performs significantly better than a normal drive, and that just isn't happening yet.

I'm uncertain if this is because of poor design, bad queueing, or other issues. But the very BEST hybrid I've seen performs only a couple of percent better than a normal drive, and then not even across the board, but only in specific tests.

Hybrid drives have a long way to go before they become my first choice. But at least you now have entry-level pricing on some of them, which is more than I can say for the full SSDs.

Re:expected behaviour (2, Interesting)

MonoSynth (323007) | more than 4 years ago | (#32429006)

Hybrid drives aren't made to be first choice. They're made to be an affordable choice. If you want to assemble an affordable but fast PC nowadays, you'll probably end up with a 40GB SSD for OS+Apps with a cheap, silent and big hard disk for storage. The problem with this approach is the barrier at 40GB. What if your SSD needs more space? What if it turns out that some frequently-used data is on the hard disk? Or that 60% of the OS files are hardly used? Hybrid drives try to decide for themselves which data should be optimized.

But I'm not really sure that they're optimizing at the right level. Maybe they should expose themselves to the operating system as two separate partitions and let the filesystem implement the optimization while showing up as one single volume to the end-user.

Re:expected behaviour (1)

hey (83763) | more than 4 years ago | (#32430552)

Maybe both... the 40GB SSD *and* a hybrid drive is the answer.

Re:expected behaviour (1)

jgrahn (181062) | more than 4 years ago | (#32437222)

If you want to assemble an affordable but fast PC nowadays, you'll probably end up with a 40GB SSD for OS+Apps with a cheap, silent and big hard disk for storage. The problem with this approach is the barrier at 40GB. What if your SSD needs more space? What if it turns out that some frequently-used data is on the hard disk? Or that 60% of the OS files are hardly used? Hybrid drives try to decide for themselves which data should be optimized.

I fail to see why the OS and "apps" should be the things that go onto the SSD. Surely that's stuff that's read once a day or so, and then stays in RAM? I bet it's rare for a machine that waits for disk I/O to wait for *such* things, compared to pure user data (including whatever data a mail/web/etc server handles).

Re:expected behaviour (0)

Anonymous Coward | more than 4 years ago | (#32437494)

Hitachi is already gearing up to produce an optical-SSD combo to fit notebook packaging:
http://hardware.slashdot.org/story/10/06/01/1018222/Hitachi-LG-Debuts-HyDrive-Optical-Drive-With-SSD?art_pos=42

It doesn't really matter what drive you "hybridize" with if you're just doing it as a packaging exercise -- you're still getting 3 devices in 2 bays.

Pair this with a big HD and an OS tuned to do intelligent caching and you're all set. You can also use the SSD for caching the optical drive too, where appropriate.

Re:expected behaviour (1)

icegreentea (974342) | more than 4 years ago | (#32429976)

The XT was pretty much made for laptops. It's really the only place where getting a true hybrid (as opposed to HDD + SSD) really makes sense.

For the XT, the SSD works as a read cache, and read cache only. You're only going to be seeing performance increases on whatever it has cached (4gigs). So if you have a few frequently used programs with long startup times, you'll see more than 'a couple of percent' better. And that's about it.

Hybrid drives will always be the compromise between HDD and SSD. You will never see them performing on the same order of performance as SSD across the board unless the SSD component becomes huge.

That's life on the Bleeding Edge (3, Insightful)

toygeek (473120) | more than 4 years ago | (#32428766)

Does anyone not remember the growing pains of previous technologies? It's not like this has never happened before. $Vendor releases $Product that does not meet $Expectations, charges a premium for it, and then fixes it later. Intel put out a whole slew of processors that couldn't even do proper math!

So, if you're going to live life on the edge of the newest technology, this kind of thing is to be expected. Anybody with higher expectations should stick to last years technology and get the best of *that* instead of the newest $uberware to come out.

Re:That's life on the Bleeding Edge (2, Insightful)

Aeternitas827 (1256210) | more than 4 years ago | (#32428824)

Anybody with higher expectations should stick to last years technology and get the best of *that* instead of the newest $uberware to come out.

I take that to an extreme; the PC I'm using now is about 5 years old, has no real scalability at this point, but it still works great (especially when I got rid of the Windows user who was using it, and swapped it to Ubuntu) for my purposes. Yeah, it'd be nice to have something nice and shiny and new, but it's not worth the goddam headache.

Re:That's life on the Bleeding Edge (1)

arndawg (1468629) | more than 4 years ago | (#32429058)

Wow. That really is EXTREME!

Re:That's life on the Bleeding Edge (1)

Aeternitas827 (1256210) | more than 4 years ago | (#32429128)

More like she needed a better computer that could handle her haphazard exploits on the internet more than I needed a new shiny toy. I can wait a year; if I'd forced her to wait a year, there would be smouldering ashes on the floor by now, accompanied by a keyboard missing several keys. I can only maintain a semi-stable XP machine for so long, at this point I consider a high-risk 2.5 years an accomplishment.

Re:That's life on the Bleeding Edge (1)

hairyfeet (841228) | more than 4 years ago | (#32429266)

Put her on Windows 7. My dad could kill XP like nobody's business, I swear I'd get called out to fix his computer every other week. Put him on Windows 7 at both his home and office and now the only time I get called out is when he gets some new gadget he wants me to hook up. If you have someone that kills XP trust me, that upgrade disc to W7 HP will be the best $100 you ever spent. Just add Comodo Time Machine and Comodo AV (both free) and short of a HDD failure you'll be good to go.

As for TFA, after the whole mess with Seagate firmware last year I'd avoid any new tech from them like the plague. I've seen far too many dead or dying Seagate drives in the past couple of years, their QA has really gone downhill. I'll be sticking with WD until they get the bugs knocked out and their QA back up, thanks anyway.

Re:That's life on the Bleeding Edge (2, Insightful)

toygeek (473120) | more than 4 years ago | (#32429544)

I understand that completely. But when I built my new PC this year, I bought the best of 1-2 year old technology. I'm not a gamer, so I didn't need the best of the best, just something fast. It's stable, has no issues, and just works the way it is supposed to.

Re:That's life on the Bleeding Edge (0)

Anonymous Coward | more than 4 years ago | (#32433692)

That's because CPU speeds have only improved 30-50% since 2003 or so. Graphics has improved more, but just as with multicore, not many apps really benefit from that increase, because it was fast enough before.

Re:That's life on the Bleeding Edge (1, Insightful)

Anonymous Coward | more than 4 years ago | (#32428836)

Absolutely. For a few years when somewhat-affordable consumer SSDs were entering the market, many of them were total shit, and even the good ones were having firmware upgrades released for them.

Re:That's life on the Bleeding Edge (1)

compro01 (777531) | more than 4 years ago | (#32429086)

I think I will wait until the ATA-8 spec is released with a standardized version of this.

OS and file system agnostic? (5, Interesting)

Anonymous Coward | more than 4 years ago | (#32428780)

The caching and everything is all happening at a level below the OS and the file system, but these tests seem to have all been run in Windows 7 Ultimate x64, whatever that is.

Would another file system (ext4, for example) on Linux/*BSD or HFS+ on Mac OS yield different results, I wonder, w/and w/o swap? Can there be clashing optimization techniques here?

Re:OS and file system agnostic? (1)

drinkypoo (153816) | more than 4 years ago | (#32429660)

Can there be clashing optimization techniques here?

No, because the disk-based disk cache in Vista depends on having access to the flash volume, and this drive doesn't provide that, therefore it is not being used.

Re:OS and file system agnostic? (1)

deroby (568773) | more than 4 years ago | (#32429914)

The effects should be (more or less) identical across OSes, as they mostly use HDDs in the same way. Sure, there will be differences in the way the filesystem puts things on disk, but in the end it's always a bunch of data blocks laid down in a structured way so they can be retrieved easily afterwards. (Exceptions exist, e.g. log-structured file systems, but in 99.999% of cases the general idea is the same.)

What I do wonder is: how does this thing feel 'in reality'? Most of these tests are based around (extremely) synthetic benchmarks that don't even come close to actual usage, let alone allow this kind of hardware combination to show its potential... (IMHO).

For comparison, I'm currently running Win7 on an old/slowish 7200 rpm 2.5" HDD, and out of the box it performs /annoyingly/ /badly/: booting is slow, launching programs takes forever, and once RAM is filled up and I switch between programs or dare start something new, the machine seems to grind to a halt and the HDD light stays on forever. So I added 2 USB sticks and let ReadyBoost play with 2GB on each, and things became... well, not much different IMHO. I then went for eBoostr using the same 2 x 2GB setup, and now things are/feel much 'faster'. Booting is somewhat better (although I have a feeling the eBoostr driver isn't quite the first thing to be loaded and/or doesn't quite realise there is stuff to find on the USB sticks right off the bat; not sure if anyone knows of an easy way to optimize that?), large, frequently used programs like e.g. Outlook start up in seconds (!), VS2008 takes about 20 seconds to open a (smallish) project (it used to be over a minute!), Firefox loads much faster than before, and clicking a fonts dropdown (e.g. in Excel) doesn't make my machine go non-responsive any more while scanning/fetching all the .ttf files... etc. etc.

Stuff it all had to 'learn', but obviously it now 'knows' what's important and helps performance accordingly, big time!

I have high hopes that a cache sitting directly on the HDD may have the same impact over time, if not better... it would in any case free up some RAM and overhead on the computer side, although at the cost of configurability. Then again, if the algorithm is clever enough, there is no need for configuration, is there...

Then again, I wonder why they can't simply use a large slab of (cheap?!) DRAM and have its contents buffered on the outer tracks of the platters. This would mean that when spinning up, the drive needs to read its cache into memory first before being able to serve data to the machine; but given the read speeds on this part of the drive - and the fact that you don't have to worry about SATA or whatever interface overhead - reading, say, 8GB of cache into DRAM shouldn't take much longer than the computer's POST overall, I think. Then again, as the cache is optimized over time, writing those changes back to the disk might be 'complicated', as it must be done while the drive seems idle and most certainly not while it might interfere with whatever the user (or OS) is doing... doable though.

Why Can't It Just Act As Write-Back Cache? (1)

scharman (308566) | more than 4 years ago | (#32429152)

With hard drive access times in the very low milliseconds, it has me baffled why a fully associative cache can't be implemented with write-back.

This strikes me as pretty much the ideal solution. Surely the hardware is fast enough these days to support such a system?

Yes I know the cache hit search becomes the bottleneck, but we're talking hundreds of microseconds here! Use volatile memory for the LRU indexes / search and it would be damn quick for hits. Ensure that the sector tag is still kept for each line (sector) in the flash and on reboot the volatile memory rebuilds its coherency.
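A toy sketch of the write-back scheme proposed here. All class and method names are illustrative inventions of mine, not Seagate's design (the real XT firmware only read-caches); the point is just how little logic a fully associative write-back cache with an in-memory LRU index actually needs.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy fully associative write-back sector cache with LRU eviction.

    'flash' stands in for the non-volatile cache medium, 'disk' for the
    slow platters, and the OrderedDict is the volatile LRU/dirty index.
    """
    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk            # dict: sector -> data (slow medium)
        self.index = OrderedDict()  # sector -> dirty flag, in LRU order
        self.flash = {}             # sector -> data (fast medium)

    def read(self, sector):
        if sector in self.index:
            self.index.move_to_end(sector)   # hit: refresh recency
            return self.flash[sector]
        data = self.disk.get(sector)         # miss: fetch from disk
        self._install(sector, data, dirty=False)
        return data

    def write(self, sector, data):
        # Writes are absorbed by flash; the disk sees them only on
        # eviction or an explicit flush.
        self._install(sector, data, dirty=True)

    def _install(self, sector, data, dirty):
        if sector in self.index:
            dirty = dirty or self.index[sector]
        self.flash[sector] = data
        self.index[sector] = dirty
        self.index.move_to_end(sector)
        while len(self.index) > self.capacity:
            victim, was_dirty = self.index.popitem(last=False)  # evict LRU
            if was_dirty:                    # write back only dirty sectors
                self.disk[victim] = self.flash[victim]
            del self.flash[victim]

    def flush(self):
        for sector, dirty in self.index.items():
            if dirty:
                self.disk[sector] = self.flash[sector]
                self.index[sector] = False
```

Usage: `c = WriteBackCache(capacity=2, disk={})`, then `c.write(...)` and `c.read(...)` touch only flash until an eviction or `c.flush()` pushes dirty sectors to disk. What this sketch glosses over is exactly what the replies below raise: flash write bandwidth, wear, and getting the crash-recovery corner cases right in firmware.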

Re:Why Can't It Just Act As Write-Back Cache? (1)

Gekke Eekhoorn (27027) | more than 4 years ago | (#32429226)

Three reasons:
- RAM is expensive
- The OS can do it better than the disk (except at boot time)
- Doing it right is not trivial (complicated firmware is a bad thing)

If you want a disk cache with write-back, buy more memory for your system, that's what the OS does with it.

Re:Why Can't It Just Act As Write-Back Cache? (4, Insightful)

scharman (308566) | more than 4 years ago | (#32429434)

(a.) Volatile memory is cheap in the amount needed for just the cache search (all it has to store is maybe 16 bytes per sector, which is tiny). The RAM for the index is a trivial part of the cost compared to the flash memory, which is where your sectors are being stored.
(b.) Re-read what I've listed above - I'm not suggesting you remove the OS tier of disk caching.
(c.) A fully associative algorithm is trivial in complexity in contrast to their 'adaptive' algorithms. A CS101 undergrad could produce a reasonable implementation in an hour. This is trivial stuff.

The OS is awful at write-back because if the power fails you've lost state. The benefit of a hybrid drive is that the flash is non-volatile. Writing to flash is cheap; writing to the disk is expensive. You get the best of both worlds with a flash-based write-back cache.

The benefit of flash is that it's cheaper than RAM, so you can have more of it, whilst being far faster than mechanical storage. A 32 or 64 GB flash hybrid drive provides enough cache that most user operations only rarely need to touch the disk, whilst not forcing a separate 'system' and 'data' drive. As far as the system is concerned, it's just presented as one very fast 2 TB drive (or whatever).

The only time the system will slow down is when you begin to thrash the cache, which is perfectly reasonable, as it means you've exhausted the flash capacity. For 99.999% of usage situations this will never occur, and it will feel just like a very, very quick 2 TB flash drive.

Re:Why Can't It Just Act As Write-Back Cache? (4, Funny)

Glonoinha (587375) | more than 4 years ago | (#32429550)

I'm curious - what sort of algorithm would you use that can effectively store the data needed for a cache search representing 4096-byte sectors in 16 bytes?

As for 'the only time the system will slow down is when you have out-read the cache' - that's exactly the scenario the OP is describing: massive serial reads on files larger than the cache. IIRC the cache was sized on the order of a few megabytes, and every multimedia file I read all day / all night (music files, video files, game VOBs, etc.) is at least that large; most are much larger.

PS - "A CS101 undergrad could implement a reasonable implementation in a hour."
Now that's funny. Most first-semester CS101 undergrad students I've met couldn't pour rocks out of a box if the instructions were printed on the underside of the box.

Re:Why Can't It Just Act As Write-Back Cache? (1)

TheLink (130905) | more than 4 years ago | (#32429778)

> Most first semester CS 101 undergrad students I've met couldn't pour rocks out of a box if the instructions were printed on the underside of the box.

Because rocks are heavy, and most CS101 undergrad students aren't very strong? :)

Re:Why Can't It Just Act As Write-Back Cache? (1)

Christian Smith (3497) | more than 4 years ago | (#32433772)

The OS is awful at write-back as if the power fails you've lost state. The benefit of a hybrid drive is that the flash is non-volatile. Writing to the flash ram is cheap. Writing to the disk is expensive. You get the best of both worlds with a flash based write-back cache.

Unfortunately, flash is quite slow at writing, especially with only a single channel to write to. While a drive like this would probably still beat a pure HD for write latency (and hence perceived performance), synthetic sequential-write benchmarks would take such a pounding that the drive simply wouldn't sell.

As an example, consider the cheap Intel 40GB SSD. It has only 5 channels (half the channels and flash of the 80GB drive) and can only write 35MB/s sequentially. That works out to about 7MB/s per channel! Try selling a drive that can only write 7MB/s sequentially.

Of course, the drive could recognize sequential writes and bypass the flash cache for them, but that of course complicates the firmware further.

The OS cache shields you from most write latency anyway, and it is read latency that hinders most people's perception of performance. Hence why only read caching is done.

Re:Why Can't It Just Act As Write-Back Cache? (1)

Gekke Eekhoorn (27027) | more than 4 years ago | (#32450094)

Wait, what? Oh I see - are you proposing to add a fully associative cache in front of the 4GB Flash memory to speed up cache lookups and thus lazily storing writes as well?

I thought you were caching the stored data in a cache. I must admit I kinda glossed over the "fully associative with write-back" bit :-)

I suppose that can work - SLC is great for caching writes on. However, it's a lot more work than simply copying hot reads onto the Flash and caching them there. What you're proposing means a lot of new work on the disk controller, whereas now they simply slapped a caching thing on top of what they had.

However, at http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Memory/fully.html [umd.edu] they explain fully associative caches nicely and add that "The hardware for finding the right slot, then picking the slot if more than one choice is available is rather large, so fully associative caches are not used in practice".

I don't think it really matters how Seagate exactly decides to cache stuff - right now they do read-cache only and it would be nice if they did a write-cache as well. You can do that just fine without using fully associative caches for the addressing.

Doing caching right is just not a trivial thing, especially if you have to do it on a tiny embedded platform.
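The point about not needing fully associative hardware can be illustrated in software: a plain hash map plus a dirty bit gives write-back behaviour with no associative lookup hardware at all. A toy sketch (capacity, names, and block layout are invented for the example):

```python
from collections import OrderedDict

# Illustrative LRU block cache with lazy write-back, the kind of structure
# firmware could use instead of a hardware fully associative lookup.
class WriteBackCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing         # dict: block number -> data ("platter")
        self.cache = OrderedDict()     # block number -> (data, dirty flag)

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)         # mark recently used
            return self.cache[block][0]
        data = self.backing.get(block)
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        self._insert(block, data, dirty=True)     # lazy: no backing write yet

    def _insert(self, block, data, dirty):
        if block in self.cache:
            dirty = dirty or self.cache[block][1]
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            victim, (vdata, vdirty) = self.cache.popitem(last=False)
            if vdirty:                             # write back only on eviction
                self.backing[victim] = vdata

disk = {n: f"old{n}" for n in range(10)}
c = WriteBackCache(capacity=2, backing=disk)
c.write(0, "new0")
assert disk[0] == "old0"       # dirty data lives only in the cache so far
c.read(1); c.read(2)           # evicting block 0 forces the write-back
assert disk[0] == "new0"
```

The lookup is just a dictionary, so the "large hardware" objection from the UMD notes doesn't apply to a firmware implementation; the hard parts are eviction policy and surviving power loss with dirty data in flash.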

Re:Why Can't It Just Act As Write-Back Cache? (1)

soppsa (1797376) | more than 4 years ago | (#32468264)

A CS101 undergrad could implement a reasonable implementation in an hour. This is trivial stuff.

Me thinks you are not a computer scientist, or do not remember the average CS101 student...

Re:Why Can't It Just Act As Write-Back Cache? (1)

AHuxley (892839) | more than 4 years ago | (#32429652)

It also seems good firmware is expensive.
Sandforce and Intel seem to have the skills.
Where did Seagate buy its firmware from ;)

Re:Why Can't It Just Act As Write-Back Cache? (1)

deroby (568773) | more than 4 years ago | (#32430014)

The main problem with OS caches is that the data still has to be read from disk into memory before it can be used.
=> it's all great that the OS keeps a copy of something.dll in memory because it has learned over time that the file is needed very often, but it still has to read it at least once first.

Although the HDD will not realise what it is caching, it can have the relevant blocks already sitting in its cache long before the OS asks for it.

So yes, I agree, adding RAM on the HDD makes it more expensive, and adding the cache logic even more so, but offloading this to the hardware seems worth the (small) extra price IMHO,
ESPECIALLY for everyday users, as they will benefit from it the most... (servers and power-gamers that 'need' the performance will likely be willing to put down the money for an $$D disk)

Re:Why Can't It Just Act As Write-Back Cache? (1)

m.dillon (147925) | more than 4 years ago | (#32434844)

This doesn't work well in practice. About the only thing the HDD can actually cache is unrequested data that passes under the head while it is going after requested data. For example, it can cache data ahead of linear read requests, even if several programs are doing linear reads from different parts of the disk. This is what the HD's zone cache does. Usually around 16 'zones' can be tracked in this manner.

Unfortunately this is data which is already readily accessible, so once the HD caches enough to buffer the requests from the OS there is no point caching any more (performance will not improve any further). Adding more cache ram to the HD itself will have little effect once that point is passed.

It would be far, far better to spend the extra money on ram for the system and not ram for the HD. The hybrid SSD model for the HD also has serious problems... the HD has no way to determine what data should be preferentially cached whereas the OS does. So it is far better to attach a SSD directly to the OS as a separate entity and have the OS do the data/meta-data caching.

-Matt
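For readers unfamiliar with the zone cache described above, here is a toy model: the drive tracks a handful of concurrent sequential streams and treats a read that lands where a tracked stream left off as a hit on data it already picked up in passing. The zone count and all names are invented for the demo, not taken from any real drive:

```python
from collections import OrderedDict

# Toy model of a drive's per-stream read-ahead ("zone") cache.
class ZoneCache:
    def __init__(self, max_zones=16):
        self.max_zones = max_zones
        self.zones = OrderedDict()   # next expected LBA -> run length so far

    def read(self, lba, nblocks):
        # A read landing where a tracked stream left off is a cache hit:
        # that data was prefetched while the head passed over it.
        hit = lba in self.zones
        run = self.zones.pop(lba, 0)
        self.zones[lba + nblocks] = run + 1   # next LBA we expect to see
        while len(self.zones) > self.max_zones:
            self.zones.popitem(last=False)    # forget the stalest stream
        return hit

zc = ZoneCache()
assert zc.read(0, 8) is False     # first access: miss, starts a stream
assert zc.read(8, 8) is True      # sequential follow-up: served from cache
assert zc.read(5000, 8) is False  # unrelated random read: its own stream
```

This also shows Matt's point: once there are enough zones to cover the OS's concurrent linear readers, adding more cache RAM buys nothing, because only data the head was passing over anyway ever lands in the cache.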

Re:Why Can't It Just Act As Write-Back Cache? (1)

deroby (568773) | more than 4 years ago | (#32442966)

You assume here that the information in the disk cache needs to be read from the disk. While this is true for "current average hardware" using disk read-ahead/read-behind caching as you describe, it is not true for the non-volatile cache used in the disks being tested here.

[... the HD has no way to determine what data should be preferentially cached whereas the OS does ...]

That's where the 'learning' comes into play, which is what makes this kind of drive special. While I agree that the drive does not know the meaning of what it is caching, it certainly can learn which blocks of data are accessed most frequently and hence can put these in its *non-volatile* cache, or as I was musing, in its "preloaded" cache.
You could argue that adding 4GB of RAM might be more useful and in fact might be used in many other ways than just cache, but it also might turn out much more expensive (upgrading laptop memory gets expensive fast), or simply impossible (32-bit systems).
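The block-frequency 'learning' described here could be modelled roughly like this. To be clear, this is a guess at the general idea, not Seagate's actual Adaptive Memory algorithm; every name and number is made up:

```python
from collections import Counter

# Toy model of "adaptive" read caching: count block accesses over time and
# pin the hottest blocks in a small non-volatile flash cache.
class AdaptiveReadCache:
    def __init__(self, slots):
        self.slots = slots
        self.hits = Counter()    # in a real drive, persisted across reboots
        self.pinned = set()

    def record_access(self, block):
        self.hits[block] += 1

    def rebuild(self):
        # After a learning period, keep only the most frequent blocks.
        self.pinned = {b for b, _ in self.hits.most_common(self.slots)}

    def served_from_flash(self, block):
        return block in self.pinned

cache = AdaptiveReadCache(slots=2)
for block in [7, 7, 7, 3, 3, 9]:   # simulated boot-time access pattern
    cache.record_access(block)
cache.rebuild()
assert cache.served_from_flash(7) and cache.served_from_flash(3)
assert not cache.served_from_flash(9)
```

Because the pinned set survives power-off, repeat workloads like booting get faster across reboots, which is exactly the behaviour the early XT reviews reported.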

Re:Why Can't It Just Act As Write-Back Cache? (2, Informative)

hey (83763) | more than 4 years ago | (#32430596)

I'd love to buy more memory but I already have 4GB (the limit for many machines.)

Re:Why Can't It Just Act As Write-Back Cache? (1)

bored (40072) | more than 4 years ago | (#32439438)

Buy more memory, and put your swap cache on it. There are a number of ramdisks that use the memory above 4GB on 32-bit M$ OS's. With a little Google searching you can find a few that work well and are free. Sure, it won't act as disk cache, but it works fantastically for running applications that are willing to consume multiple GB of disk (aka a whole bunch of VMware sessions).

Or call M$ and bitch about them restricting desktop users to 3GB of memory, while the 32-bit server versions can access 64GB.

I really have no idea what the "problem" is (1)

Cowclops (630818) | more than 4 years ago | (#32432074)

In some particular benchmark it doesn't have as high sequential read speeds as you might expect, and yet these "mp3" and "video" read benchmarks probably don't require the maximum bandwidth the drive can deliver. It might be working EXACTLY as expected if it's streaming MP3s from the flash media, which may have a "slower, but fast enough for media streaming" sequential speed, and it's doing so to keep the platter mechanism free for anything else that might come up.

I don't rate the performance of this drive as "having issues" at all, even after reading the entire benchmark page. The hybrid nature of the drive seems like it would make it very hard to benchmark accurately - the real question is whether it feels SSD-like in normal operation or if it feels slower than a regular laptop drive from the same company. If it's the latter - THAT'S a problem. I know there's no quantitative measurement of "does my computer feel faster," but it seems like the data they've presented is likely not representative of what you should expect from the drive. The actual large-file sequential speed seems to be at the top of the laptop hard drive list, and the random reads are close to "true SSD" territory.

I'm guessing nothing needs to be fixed at all and it's working exactly as intended... it's just that one or two benchmarks turn out lower numbers than you'd expect even though the overall performance is good.

Well.. (1)

SlashDev (627697) | more than 4 years ago | (#32432216)

... what do you expect? It's a Hybrid...

Charts are suspect (1)

Smallpond (221300) | more than 4 years ago | (#32432962)

Why do none of the symbols in the keys match the chart body? For example, Scorpio has a black triangle in the key, but the line on the chart has black diamonds. Is the chart software flaky or are the results being rigged?

Its not used as cache as in RAM is used for disk (0)

Anonymous Coward | more than 4 years ago | (#32435538)

This is what happens when people get too smart for their own good! They have tried to implement a cache of the most frequently used data, which makes the 4GB of flash useless for anything other than booting or loading programs.

Instead, if they treated it like RAM that persists across reboots, they would get better bang for the buck. 4GB is too little for that, though; they would need at least something like 32GB. Then they could buffer all IO going to the actual disk in the flash and lazily write it to the rotating media. That would not only alleviate the sequential bandwidth problems but would help with all kinds of copying. Random reads/writes would be handled much better as well.

I would even go one step further and segregate boot/startup time cache into a 4GB area on flash and use the rest of 32GB flash as buffer for all transfers to the disk.

And I would not use SLC; I would use MLC. Its lifespan is longer than that of the Momentus drive it is supplementing.

Mine spins down for no reason (0)

Anonymous Coward | more than 4 years ago | (#32435572)

I bought one too, based on all the press hype. One problem I've noticed: it powers down even though I'm actively using the PC and have power-down disabled in Power Management. As a result, HORRIBLE lag times for writes. The system hangs, and I hear it spin up, then finally write. I think mine needs to go back to Newegg, I'm afraid.

Re:Mine spins down for no reason (0)

Anonymous Coward | more than 4 years ago | (#32441210)

Yikes, I hope that is faulty behavior and not common-case. I had a similar-sounding problem with a WD Caviar Green... I think it was faulty behavior, but I returned it and got a Black instead...
