Slashdot: News for Nerds


Exploring Advanced Format Hard Drive Technology

ScuttleMonkey posted more than 4 years ago | from the lift-the-hood-once-in-a-while dept.

Data Storage 165

MojoKid writes "Hard drive capacities are sometimes broken down by the number of platters and the size of each. The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters. These values, however, only refer to the accessible storage capacity, not the total size of the platter itself. Invisible to the end-user, additional capacity is used to store positional information and for ECC. The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096 bytes. This allows the ECC data to be stored more efficiently. Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector. Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs."
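The 512e emulation described in the summary is easy to sketch: each physical 4096-byte sector holds eight logical 512-byte sectors, so a logical LBA maps to a (physical sector, byte offset) pair. A minimal Python illustration (the function name is mine, not WD's):

```python
LOGICAL = 512                 # emulated sector size presented to the OS
PHYSICAL = 4096               # native Advanced Format sector size
RATIO = PHYSICAL // LOGICAL   # 8 logical sectors per physical sector

def logical_to_physical(lba):
    """Map an emulated 512-byte LBA to (physical sector, byte offset)."""
    return lba // RATIO, (lba % RATIO) * LOGICAL

# A write of eight consecutive logical sectors that starts mid-sector
# touches two physical sectors, forcing a read-modify-write cycle.
# Windows XP historically starts the first partition at LBA 63:
first, _ = logical_to_physical(63)
last, _ = logical_to_physical(63 + 7)
print(first, last)  # the request straddles two physical sectors
```

This is the root of the XP problem the summary alludes to: XP-era partitioning starts at LBA 63, so every 4 KiB cluster straddles two physical sectors.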


165 comments

You mean... (0)

Anonymous Coward | more than 4 years ago | (#31291174)

there is something more advanced than mkfs or format c:\ ?

Disk Alignment... Learn this! (2, Informative)

JustASlashDotGuy (905444) | more than 4 years ago | (#31293348)

This is especially important for all you who manage a SAN. Learn it, love it, live it.

To learn why disk partition alignment can be important, please reference the following blog post: http://clariionblogs.blogspot.com/2008/02/disk-alignment.html [blogspot.com]
Instructions for Stripe Alignment/Partition Alignment within a Windows Operating System
Reference the following link for info on DiskPart: http://support.microsoft.com/kb/300415 [microsoft.com]
1 - At a command prompt on a Windows host, type diskpart
2 - Type select disk X (X being the numbered disk within Disk Management that you want to align)
3 - Type create partition primary align=64
4 - You can then format the drive and assign a drive letter to it
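The steps above can also be collected into a DiskPart script and run non-interactively with diskpart /s script.txt from an elevated prompt. The disk number and drive letter below are placeholders for your own values:

```
rem Example only -- replace the disk number and drive letter.
rem align=64 is in KB, so this starts the partition on a 64 KB boundary.
select disk 1
create partition primary align=64
format fs=ntfs quick
assign letter=E
```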

Large sector size good? (1, Interesting)

ArcherB (796902) | more than 4 years ago | (#31291216)

I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

Re:Large sector size good? (1)

ArcherB (796902) | more than 4 years ago | (#31291274)

I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

OK, it's 4K (4096 bytes), not 4096K. I guess that's a bit more doable when we're talking about sizes greater than 1TB.

Re:Large sector size good? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31291286)

If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common files ystems. The 4096K is an error in the article.

Re:Large sector size good? (1)

ArcherB (796902) | more than 4 years ago | (#31291320)

If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common files ystems. The 4096K is an error in the article.

Here is a quote from the article:

Advanced Format changes a hard drive's sector size from 512 bytes (the standard for the past three decades) to 4096K

However, it seems correct in other places, like a graphic for example.

Re:Large sector size good? (4, Funny)

StikyPad (445176) | more than 4 years ago | (#31291386)

If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common files ystems.

Looks like they're not the only ones who miscalculated their block boundary.

Re:Large sector size good? (1)

WrongSizeGlass (838941) | more than 4 years ago | (#31291628)

If you read the article carefully ...

Well, if you read the article very carefully you'll note that it lists the WD AF drive as 5400 RPM. If true then they'll really see some performance gains from a 7200 RPM version. If it's just another typo/mistake/ooopsy then we should tag this article as "needs editor".

Re:Large sector size good? (2, Insightful)

BitZtream (692029) | more than 4 years ago | (#31291322)

You want the sector size to be smaller than the average file size or you're going to waste a lot of space. If your average file size is large, and writes are sequential, you want the largest possible sector sizes.

Re:Large sector size good? (1, Funny)

noidentity (188756) | more than 4 years ago | (#31291426)

You want the sector size to be smaller than the average file size or you're going to waste a lot of space. If your average file size is large, and writes are sequential, you want the largest possible sector sizes.

Please take a few moments to notice the amount of redundancy above, and the lack of it in this rewritten version:

You want the sectors to be smaller than average files or you're going to waste a lot of space. If your average files are large, and writes are sequential, you want the largest possible sectors.

In other words, the adjective small implicitly refers to size, so you don't need to say small size, and the same for large.

Re:Large sector size good? (0)

Anonymous Coward | more than 4 years ago | (#31291502)

"Smaller than the average file size" == "Smaller than the average size of all files".

"Smaller than the average files" == "Smaller than the size of an average file".

Not exactly the same meaning. In fact, depending on the distribution of file sizes on your FS, the two numbers can be very different.

Re:Large sector size good? (1)

noidentity (188756) | more than 4 years ago | (#31292242)

By average file, I meant mean [wikipedia.org] , not median. If average must mean median [wikipedia.org] , then I guess I'd have to write

You want the sector size to be less than the mean file size or you're going to waste a lot of space. If your mean file size is great, and writes are sequential, you want the largest possible sectors.

My main point was that a size is a magnitude; it has no size (or weight or temperature) itself. When I read smaller size, I picture a size printed in a smaller font. If you mean smaller, just write smaller.

Re:Large sector size good? (1)

ghjm (8918) | more than 4 years ago | (#31291548)

You rewrote it down to null? Did every term cancel out?

Re:Large sector size good? (1)

Joce640k (829181) | more than 4 years ago | (#31291326)

Most file systems work by clusters, not sectors.

NTFS partitions use 4k clusters by default so you already have this problem.

Re:Large sector size good? (1)

peragrin (659227) | more than 4 years ago | (#31291472)

Indeed, that is why they are doing this at 4k. Most current FS's use 4k as their base cluster size. By updating the sector size to match the typical cluster size anyway, they literally cut the number of required ECC blocks by 8. You can take two drives with the same physical characteristics, and by increasing the sector size to 4k you gain hundreds of megabytes on the average 100 gigabyte drive.

Re:Large sector size good? (5, Insightful)

forkazoo (138186) | more than 4 years ago | (#31291338)

I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

The filesystem's minimum allocation unit size doesn't necessarily need to have a strong relationship with the physical sector size. Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)

Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.) So, worst case for your example of a thousand 1k files is actually 4 megabytes, not 4 gigabytes as you suggest. And, really, if my 2 terabyte drive gets an extra 11% from the more efficient ECC with the 4k sectors, that gives me a free 220000 megabytes, which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files.

Re:Large sector size good? (1)

tepples (727027) | more than 4 years ago | (#31291586)

Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit. (IIRC, Reiser is such an FS.)

True, block suballocation [wikipedia.org] is a killer feature. But other than archive formats such as zip, are there any maintained file systems for Windows or Linux with this feature?

Re:Large sector size good? (1)

Avtuunaaja (1249076) | more than 4 years ago | (#31291772)

Btrfs. Modify-in-place filesystems work poorly with block suballocation -- reiserfs is a case study. On the other hand, Copy-on-Write filesystems lose nothing from it.

Re:Large sector size good? (3, Informative)

jabuzz (182671) | more than 4 years ago | (#31291782)

IBM's GPFS is one, though it ain't free it does support Linux and Windows both mounting the same file system at the same time. They reckon the optimum block size for the file system is 1MB. I am not convinced of that myself, but always give my GPFS file systems 1MB block sizes.

Then there is XFS that for small files will put the data in with the metadata to save space. However unless you have millions of files forget about it. With modern drive sizes the loss of space is not important. If you have millions of files stop using the file system as a database.

Re:Large sector size good? (1)

Korin43 (881732) | more than 4 years ago | (#31291838)

From the wikipedia page you linked to: Btrfs, ReiserFS, Reiser4, FreeBSD UFS2. All of these are actively maintained, and ReiserFS and UFS2 are stable (although UFS2 is BSD, not Linux).

OJFS (2, Funny)

tepples (727027) | more than 4 years ago | (#31292050)

From the wikipedia page you linked to: Btrfs

Thanks.

ReiserFS and UFS2 are stable

I was looking for file systems with killer features, not a killer maintainer ;-)

Re:Large sector size good? (1)

Hurricane78 (562437) | more than 4 years ago | (#31293624)

Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.)

Wanna bet TFS was written by a Verizon employee? ;)

Re:Large sector size good? (0)

Anonymous Coward | more than 4 years ago | (#31291354)

We approve of your subject line's redundant repetition.

- Dept. of Redundancy Dept.

Re:Large sector size good? (1)

Cyberax (705495) | more than 4 years ago | (#31291366)

Unless you use a clever filesystem which doesn't force file size to be a multiple of sector size.

Re:Large sector size good? (1)

NFN_NLN (633283) | more than 4 years ago | (#31291372)

I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.

You had me worried for a while there so I did a quick check. Turns out NONE of my movies or MP3's are less than 4096 bytes, so it looks like I dodged a bullet there. However, when Hollywood perfects its movie industry down to 512 different possible re-hashes of the same plot, they might be able to store a movie with better space efficiency on a 512 byte/sector drive again.

Header files are a big one (1)

tepples (727027) | more than 4 years ago | (#31291614)

Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there.

But how big are script files and source code files and PNG icons?

Re:Header files are a big one (1)

MichaelSmith (789609) | more than 4 years ago | (#31291692)

Extract from ls -l /etc

-rw-r--r-- 1 root root 10788 2009-07-31 23:55 login.defs
-rw-r--r-- 1 root root 599 2008-10-09 18:11 logrotate.conf
-rw-r--r-- 1 root root 3844 2009-10-09 01:36 lsb-base-logging.sh
-rw-r--r-- 1 root root 97 2009-10-20 10:44 lsb-release

Re:Large sector size good? (2, Interesting)

owlstead (636356) | more than 4 years ago | (#31291744)

You didn't dodge any bullet. Any file whose size goes slightly over a 4096-byte boundary will take more space. For large collections of larger files (such as an MP3 collection), you will, on average, have 2048 bytes of empty space per file in your drive's sectors. Let's say you have an archive which also uses some small files (e.g. playlists, small pictures); say the overhead is about 3 KB per file, and the average file size is about 3 MB. Since 3000 / 3000000 is about 1/1000, you could have a whopping 1 per mille loss. That's for MP3's; for movies the percentage will be much lower still. Of course, if your FS uses a block size of 4096 already then you are already paying this 1 per mille of overhead.

Personally I would not try and sue MS or WD over this issue...
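For what it's worth, the half-sector-slack estimate above checks out: the expected slack is half a sector per file, which for 3 MB files is well under one per mille. A rough Python sketch (the file size is illustrative):

```python
SECTOR = 4096
avg_slack = SECTOR / 2    # expected unused tail space per file (bytes)
avg_file = 3_000_000      # ~3 MB, a typical MP3

overhead = avg_slack / avg_file
print(f"{overhead:.3%}")  # well under 0.1% of total capacity
```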

Re:Large sector size good? (1)

owlstead (636356) | more than 4 years ago | (#31291816)

Sorry to reply on my own post here, FS block size should be minimum allocation size, which may be smaller than the physical sector size. So for your MP3 collection the overhead may be even lower...

Re:Large sector size good? (1)

jgtg32a (1173373) | more than 4 years ago | (#31291554)

It isn't that great for the OS's partition but it works out great for my Media partition

Re:Large sector size good? (2, Interesting)

Avtuunaaja (1249076) | more than 4 years ago | (#31291608)

You can fix this on the filesystem level by using packed files. For the actual disk, tracking 512-byte sectors when most operating systems actually always read them in groups of 8 is just insane. (If you wish to access files by mapping them to memory, and you do, you must do so at the granularity of the virtual memory page size. Which, on all architectures worth talking about, is 4K.)
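The page-granularity claim can be checked directly from Python; mmap.PAGESIZE is 4096 on typical x86 systems, though a few architectures use larger pages:

```python
import mmap

# Memory-mapped file views must start on a page boundary, which is why
# a 4 KiB sector lines up naturally with the VM subsystem.
print(mmap.PAGESIZE)  # typically 4096 on x86 Linux/Windows
assert mmap.PAGESIZE % 512 == 0  # always a multiple of the legacy sector
```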

Re:Large sector size good? (1)

kramulous (977841) | more than 4 years ago | (#31291812)

I see what you mean but will it be like other parts of the computer? I do computation on CPUs, GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done. Is this similar?

You have data with certain attributes and store it appropriately.

Re:Large sector size good? (1)

rickb928 (945187) | more than 4 years ago | (#31292086)

NetWare has been doing block suballocation for a while now [novell.com] . Not a bad way to make use of a larger block size, and it was crucial when early 'large' drives had to tolerate large blocks, at least before LBA was common. Novell tackled a lot of these problems fairly early, as they led the way in PC servers and had to deal with big volumes fairly quickly. Today, we take a lot of this for granted, and we are swimming in disk space so it's not a big deal. But once upon a time, this was not so. 80MB was priceless. Those sure were the days, grooming volumes and wincing over a 5MB file... ahh....

Not the first [slashdot.org] time block size was trumpeted as the next Insanely Great Thing.

I wish Windows fully implemented it. Hard to say, though, since the info is vague.

Re:Large sector size good? (1)

Darinbob (1142669) | more than 4 years ago | (#31292660)

You can have file systems that don't use up a full sector for small files. Or you do what the article mentioned and have 8 effective blocks within one physical block.

On the other hand, with your logic, 512 byte sectors are too big too, because I have lots of files that are much smaller than that...

640 terabytes (1, Funny)

Anonymous Coward | more than 4 years ago | (#31291226)

ought to be enough for anyone

Bill Gates... (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31291266)

...ought to be man enough for any woman.

512x4=4MB?? (1, Informative)

Anonymous Coward | more than 4 years ago | (#31291232)

You mean 4096 bytes, not 4096k, right? Last time I checked, eight 512 byte sectors is considerably smaller than 4MB.

Re:512x4=4MB?? (0)

owlstead (636356) | more than 4 years ago | (#31291412)

It's a copy from the start of the article. Later, in the pictures, the 4096K becomes 4K again, which is also wrong; it's 4 KiB or just 4096 bytes. Especially with hard drives, it makes no sense to call them 4K blocks of data when the total capacity is calculated in real kilobytes (oh, boy, am I gonna get flamed for this). How this all got past all the editors is beyond me.

Re:512x4=4MB?? (1)

cstdenis (1118589) | more than 4 years ago | (#31291602)

How this all got past all the editors is beyond me.

You must be new here.

Re:512x4=4MB?? (1)

ryantmer (1748734) | more than 4 years ago | (#31292324)

How this all got past all the editors is beyond me.

You must be new here.

Even funnier (more ironic?) when you observe the parent's and child's user ID :)

Re:512x4=4MB?? (1)

owlstead (636356) | more than 4 years ago | (#31293360)

I know they pull this off about every odd article, but it perpetually amazes me anyway :)

Re:512x4=4MB?? (5, Insightful)

MrMacman2u (831102) | more than 4 years ago | (#31291748)

"it's 4 KiB or just 4096 bytes."

No. Just no.

Never use the term 'KiB' for kiloBYTES ever again. Just don't do it. I don't CARE if it's "the new standard". Screw that, it's KB KiloBytes.

This "new" standard mandated by the IEC can eat me.

1024 bytes IS, and forever will be, 1 KiloByte (KB)
1000 bits IS, and forever will be, 1 KiloBit (Kb)

1999 and the IEC can DROP DEAD. I will never. EVER. Use the new """""""""""""standard"""""""""""".

That said, excellent job highlighting the dreadful editing, inaccuracies like that are so confusing to try and keep straight between what is written and what was MEANT. Thumps up for you!

Re:512x4=4MB?? (4, Informative)

the_one(2) (1117139) | more than 4 years ago | (#31292118)

It was never KB and never will be. kB perhaps but not KB.

Free disk space: 1.21 Giblets (4, Insightful)

Tetsujin (103070) | more than 4 years ago | (#31292332)

"it's 4 KiB or just 4096 bytes."

No. Just no.

Never use the term 'KiB' for kiloBYTES ever again.

"kiB" is for kibibytes, not kilobytes...

The introduction of those new units always kind of grated me, as it went against all the 20-odd years of experience I'd had with computers up to that point. But, I have to say, "kilobytes" and "megabytes" and "gigabytes" had always been ambiguously defined. Usually RAM would use the power-of-two definitions and disks would use the power-of-ten definitions... As someone who appreciates precise language, I think this effort to disambiguate the terminology is a good thing, even if it goes against what I learned about computers as a kid. I don't think making the opposite change (i.e. keeping "kilobyte" = 1024 bytes and making a new term for 1000 bytes) would have made any sense at all - the "kilo" in "kilobyte" goes against the normal definition of "kilo". I think it was always kind of sleazy that hard drive manufacturers could tell you they were giving you a megabyte of storage and it would be less than what the computer considers a "megabyte" - but the prefix has a definition that predates its use in computing, and from that perspective I think that usage, while problematic and misleading, was legitimate.

Re:Free disk space: 1.21 Giblets (1)

jedidiah (1196) | more than 4 years ago | (#31292906)

Those new prefixes are just lame. It's like they are trying to punish computer users or something for offending their sense of bureaucratic order for so long. A much more logical approach would be to specify whether or not the prefix is meant to be base-10 or base-2.

Re:Free disk space: 1.21 Giblets (0)

Anonymous Coward | more than 4 years ago | (#31293116)

"kilobytes" and "megabytes" and "gigabytes" had always been ambiguously defined.

And they still are. The original definitions can still mean powers of either 1000 or 1024.

Re:Free disk space: 1.21 Giblets (1)

Anpheus (908711) | more than 4 years ago | (#31293442)

<fallacy>And gosh darnit, the best way to remedy the ambiguity would be to refuse to confront it.</fallacy>

Re:512x4=4MB?? (1)

owlstead (636356) | more than 4 years ago | (#31293342)

I'll leave the discussion about KiB, it was a side issue, although the article does IMHO make it painfully clear why it's needed.

"That said, excellent job highlighting the dreadful editing, inaccuracies like that are so confusing to try and keep straight between what is written and what was MEANT. Thumps up for you!"

Thanks, have been moderated into oblivion anyway :P

Re:512x4=4MB?? (1)

bigstrat2003 (1058574) | more than 4 years ago | (#31293380)

I need to bookmark this post for the epic truth it contains. You said it better than I ever could, thank you!

Re:512x4=4MB?? (1)

Hurricane78 (562437) | more than 4 years ago | (#31293694)

Actually it’s kB and kb. With a small k. Since K already stands for kelvin. And what is a kelvin byte? ;)

That's a little big... (0)

Anonymous Coward | more than 4 years ago | (#31291236)

> The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096K.

I think not... try 4096 bytes.

Defrag (1)

turin39789 (921870) | more than 4 years ago | (#31291256)

Wait! Will this shorten or lengthen defrag times. Do file sizes under 4096K still exist?

Re:Defrag (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31291316)

Use a filesystem that only fragments when there is not enough contiguous free space to write a given file and you won't need to defrag.

Re:Defrag (1)

Tubal-Cain (1289912) | more than 4 years ago | (#31291522)

You assume that the hard drive is empty enough to have sufficient contiguous free space.

Re:Defrag (1)

tepples (727027) | more than 4 years ago | (#31291636)

Ideally, a file system would start to defragment itself in the background when a file is opened. At least that's what HFS on Mac has always done for files split into more than about eight extents. How big is the biggest file on your hard drive?

Re:Defrag (1)

Tubal-Cain (1289912) | more than 4 years ago | (#31291790)

Used to have a lot of 8 GB Stargate DVD ISOs. I've been splitting them up into individual ~1.7 GB episodes.

Re:Defrag (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31291802)

Use a filesystem that doesn't fragment unless absolutely necessary, and it's extremely likely you will never notice a fragmentation related performance issue.

Name that filesystem (0)

Anonymous Coward | more than 4 years ago | (#31291914)

Name that filesystem. So far, I know it's not ext3, xfs, or jfs. All my torrent file chunks arrive in random order, and I guess Transmission doesn't pre-allocate the files, and stores 'em sparse instead. The result: pretty much worst-case fragmentation scenario. My filesystem is pessimized. I've had single files that take over 10 minutes just to delete (on xfs; jfs is waaay faster).

So I dedicate a partition to incoming torrents and then copy 'em (or de-rar them) to my main storage when they're done, to defrag 'em. If I didn't have to do that, that would be sweet.

Re:Defrag (1)

owlstead (636356) | more than 4 years ago | (#31291530)

Yes, of course file sizes under 4096 bytes (not 4096K) do still exist. As stated in the article, most FS on hard disks already use 4096 byte block sizes. Thus it won't make much difference for defrag, unless you misalign the data in which case the defrag suddenly could take a *very* long time.

Not that I care, my system drive is already SSD anyways and I've not filled a drive to the brim for a long time. If I would still download movies they would certainly go to a WD green drive or anything like that without any small file sizes in sight (after doing the PAR2 / unzip etc. on a separate drive or - if big enough - the SSD of course).

XP users (3, Funny)

spaceyhackerlady (462530) | more than 4 years ago | (#31291262)

XP users do not need big hard drives to have problems.

...laura

Re:XP users (1)

MichaelSmith (789609) | more than 4 years ago | (#31291470)

But mac users go wild over big hard drives.

What About Linux Systems? (4, Interesting)

WrongSizeGlass (838941) | more than 4 years ago | (#31291304)

When this issue came up a few weeks ago there was a problem with XP and with Linux. I see they tackled the XP issue pretty quick but what about Linux?

This place [slashdot.org] had something about it.

Re:What About Linux Systems? (0)

Anonymous Coward | more than 4 years ago | (#31291376)

I see they tackled the XP issue pretty quick but what about Linux?
Linux is Free and Open Source, and proud of it. Fix it your goddamn self like any respectable Linux user.

Re:What About Linux Systems? (1)

Wesley Felter (138342) | more than 4 years ago | (#31291406)

Some distro installers do it right and some do it wrong. Give it a few years and I'm sure it will all be sorted out.

Re:What About Linux Systems? (5, Informative)

marcansoft (727665) | more than 4 years ago | (#31291438)

If Advanced Format drives were true 4k drives (i.e. they didn't lie to the OS and claim they were 512 byte drives), they'd work great on Linux (and not at all on XP). Since they lie, Linux tools will have to be updated to assume the drive lies and default to 4k alignment. Anyway, you can already use manual/advanced settings in most Linux partitioning tools to work around the issue.
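Verifying alignment by hand is simple arithmetic: a partition's starting LBA (in 512-byte units, as shown by fdisk -l) must fall on a 4 KiB boundary. A small sketch, assuming a 512e drive with no internal offset tricks:

```python
def aligned_for_4k(start_lba_512):
    """True if a partition starting at this 512-byte LBA sits on a
    4 KiB physical-sector boundary of a 512e Advanced Format drive."""
    return (start_lba_512 * 512) % 4096 == 0

# Classic DOS-era tools start the first partition at LBA 63 (misaligned);
# modern distros start at LBA 2048 (1 MiB, aligned).
print(aligned_for_4k(63), aligned_for_4k(2048))
```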

Re:What About Linux Systems? (2, Informative)

hawkingradiation (1526209) | more than 4 years ago | (#31292084)

Linux has had 4096 block size in the kernel for ages. See this article [idevelopment.info]. The issue, as I recall somebody saying, is that fdisk cannot properly do this. So use parted and you will be OK. ext3 and jfs, and I suppose xfs and a whole bunch of others, support the 4096 block size as well. BTW, who "tackled the XP issue pretty quick"? Was it Microsoft, or was it the hard drive makers? AFAIK a few hard drive manufacturers are emulating a 512 block size, so it is not a complete fix.

Typo in Article (1)

HaeMaker (221642) | more than 4 years ago | (#31291324)

It says 4096K, they mean 4096 bytes (4K). Error is in the original.

1 byte = 10 bits? (1)

djlemma (1053860) | more than 4 years ago | (#31291340)

Stupid question- are bytes really 10 bits when talking hard drive capacity?

Is that some sort of checksum going on, or did the way computers store numbers change while I wasn't looking?

Re:1 byte = 10 bits? (3, Informative)

marcansoft (727665) | more than 4 years ago | (#31291458)

No, that's totally wrong. The drive may well use 10 magnetic "cells" to store a byte (e.g. with 8b10b modulation or something similar), but that's an implementation detail. As far as everything else is concerned, bytes are 8 bits.

Re:1 byte = 10 bits? (1)

noidentity (188756) | more than 4 years ago | (#31291462)

1 byte = N bits [wikipedia.org] , not necessarily 8. You're probably thinking of an octet [wikipedia.org] , which is always 8 bits (except in universes where the oct prefix doesn't mean 8).

Re:1 byte = 10 bits? (2, Informative)

tepples (727027) | more than 4 years ago | (#31291654)

But as I understand it, byte == octet on all hardware that 1. allows the general public to develop applications and 2. is not discontinued.

Re:1 byte = 10 bits? (1)

noidentity (188756) | more than 4 years ago | (#31292266)

And as I understand it, this year == 2010 for all people that 1. are members of the general public and 2. are not crazy.

A.D. vs. 8-bit bytes (1)

tepples (727027) | more than 4 years ago | (#31292638)

I'm not even sure whether calendars based on the year of the Lord are more widespread than the 8-bit byte. Certainly, 8-bit bytes are in use in the Middle East, parts of which use the Islamic calendar and the Jewish calendar.

Re:1 byte = 10 bits? (0)

WrongSizeGlass (838941) | more than 4 years ago | (#31291482)

The article claims that they use 10 bits per byte on a hard drive. The extra 2 bits are used for the ECC data ... they are not available for 'storage'. Of course, they claim a 1,000 GB drive = 1 TB which we all know is marketing, um, speak. A real TB = 1,024 GB (and I mean real GB's, not marketing speak GB's).

Re:1 byte = 10 bits? (2, Informative)

dfsmith (960400) | more than 4 years ago | (#31291484)

Depends on the drive. In recent electrical signalling (Gb ethernet, SATA/SAS, etc.) the 8b10b encoding scheme has been very popular; and is 10 bits to a byte. The extra bits are for recovering the clock signal. The HDD has to do the same, but the manufacturers don't have to adhere to any standards inside their case.

Now, if you're asking the question "how many bytes in a MB?" there is great debate. (The answer is, and has been from the first RAMAC*, 1,000,000. However, the binary bus people like to argue otherwise; and Microsoft Windows is one of the protagonists.)

* Okay, so technically the RAMAC was 5,000,000 words, where a word was 7 bits.
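To connect the 8b10b figure above to familiar numbers: every payload byte travels as 10 line bits, which is exactly why a 3.0 Gb/s SATA link tops out at 300 MB/s of payload. A quick sketch:

```python
LINE_BITS_PER_BYTE = 10   # 8b10b: 8 data bits encoded into 10 line bits

def payload_mbps(line_rate_gbps):
    """Peak payload bandwidth in MB/s for an 8b10b-encoded link."""
    return line_rate_gbps * 1e9 / LINE_BITS_PER_BYTE / 1e6

print(payload_mbps(3.0))  # SATA II's familiar "300 MB/s"
```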

Re:1 byte = 10 bits? (2, Interesting)

KPexEA (1030982) | more than 4 years ago | (#31291750)

Group code recording? http://en.wikipedia.org/wiki/Group_code_recording [wikipedia.org] Back on my old Commodore Pet drive this was how they encoded data, since too many zeros caused the head to lose its place.

Your hard drive doesn't know about your FS (1, Informative)

Anonymous Coward | more than 4 years ago | (#31291344)

Guys, filesystem block/cluster size is not the same as hard drive sector size.
Jesus.

55GB Savings!!!!!!! (0)

Anonymous Coward | more than 4 years ago | (#31291380)

FTA:
"Each one of those ECC blocks is 40 bits wide; a 4096K block of data contains 320 bytes of ECC. Using Advanced Format's new 4096 sector size cuts the amount of ECC and Sync/DAM space significantly. According to WD, it needs just 100 bytes of ECC data per 4096K sector under the new scheme, a savings of 220 bytes."

For those not wanting to do the math...
220 extra bytes per 4096 bytes, in a 1 terabyte drive nets us 55GB more space.

From Google:
(220 / 4096) * 1 terabyte = 55 gigabytes
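In binary units the figure is exact: 220 bytes saved per 4096-byte sector comes to 55 GiB per TiB (in decimal units it is closer to 54 GB per TB). Checking the arithmetic, with the ECC byte counts quoted from the article:

```python
SECTOR = 4096    # bytes of data per Advanced Format sector
ECC_OLD = 320    # bytes of ECC covering eight legacy 512-byte sectors
ECC_NEW = 100    # bytes of ECC per 4096-byte sector (per WD)

saved_per_sector = ECC_OLD - ECC_NEW   # 220 bytes
fraction = saved_per_sector / SECTOR   # ~5.4% of raw capacity
print(fraction * 2**40 / 2**30)        # GiB reclaimed per TiB of capacity
```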

Re:55GB Savings!!!!!!! (0)

Anonymous Coward | more than 4 years ago | (#31291436)

Yeah, but you're still going to get shafted by the gigabyte to gibibyte exchange rate.

Re:55GB Savings!!!!!!! (1)

Tubal-Cain (1289912) | more than 4 years ago | (#31291604)

That's like complaining that you get shafted by the US gallon/imperial gallon exchange rate whenever you buy milk/gas/whatever. Sufficiently few people sell things labeled in gibibytes or imperial gallons that there is no confusion as to what you're getting.

Savings which do not get passed to the user (0)

Anonymous Coward | more than 4 years ago | (#31291972)

FTA:
"A WD10EARS and a WD10EADS have exactly the same unformatted capacity and Windows reports both drives offer 931GB of storage space."

TFA didn't say whether the Advanced Format drive is faster or more reliable, so as far as this reader can see, users are paying more $$$ for fewer ECC bits and bigger sectors (which means less usable space for the same number of randomly-sized files).

Clever maybe for WD, but no obvious benefit.

Serious typos all over the place (0)

_pi-away (308135) | more than 4 years ago | (#31291392)

"where 1 byte = 10 bits."

as well as multiple prominent instances of "4096K"

Being off by 3 orders of magnitude is pretty hugely wrong, as is something as basic as how many bits are in a byte.

Re:Serious typos all over the place (0)

Anonymous Coward | more than 4 years ago | (#31291624)

a byte is not always 8 bit!

the size of a byte is dependent on the architecture used.

so: "1 byte = 10 bits" is perfectly OK!

(in contrast: what you mean is an octet. an octet is always 8 bits)

Re:Serious typos all over the place (1)

_pi-away (308135) | more than 4 years ago | (#31292322)

so: "1 byte = 10 bits" is perfectly OK!

I wasn't making a general computer science statement. In the architecture the article is talking about a byte is 8 bits, so no, it's not OK.

Dear Slashdot Sales Department (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31291446)

1 No one except LOSERS uses Windows XP.

2. What is Slashdot's commission on these shameful book plugs?

Have a weekend, loozars.

Yours In Tashkent,
K. Trout

Re:Dear Slashdot Sales Department (4, Funny)

WrongSizeGlass (838941) | more than 4 years ago | (#31291552)

1 No one except LOSERS uses Windows XP.

Beck: I'm a loser, baby, 'cuz I'm usin' XP ...

2. What is Slashdot's commission on these shameful book plugs?

One free page from the book, randomly selected, until they've referred enough people to the publisher's site to receive the entire book. Unfortunately, it arrives as lose pages in no particular order. Cmdr Taco is never pleased with this.

Have a weekend, loozars.

Yours In Tashkent, K. Trout

Thanks, you too.

Re:Dear Slashdot Sales Department (4, Funny)

Arthur Grumbine (1086397) | more than 4 years ago | (#31292154)

Unfortunately, it arrives as lose pages in no particular order. Cmdr Taco is never pleased with this.

Have a weekend, loozars.

For all intensive purposes, you're post should of exploded the heads of any grammar nazis as they read they're screen. Which begs the question of what more damage could possibly be done to effect there sensibilities? Honestly, I could care less.

Re:Dear Slashdot Sales Department (2, Funny)

ryantmer (1748734) | more than 4 years ago | (#31292390)

Oh, how I wish I had mod points! :)

Re:Dear Slashdot Sales Department (0, Troll)

tzanger (1575) | more than 4 years ago | (#31292944)

It's "for all intents and purposes" not "for all intensive purposes." When you say it you can get away with it wrong, but when you write it you just look dumb.

Re:Dear Slashdot Sales Department (4, Funny)

Arthur Grumbine (1086397) | more than 4 years ago | (#31293132)

It's "for all intents and purposes" not "for all intensive purposes." When you say it you can get away with it wrong, but when you write it you just look dumb.

Indeed. Its a common mistake, but you're vigilance is dually noted. I'm just glad I didn't loose all credibility by making alot more mistakes.

Re:Dear Slashdot Sales Department (1, Informative)

Anonymous Coward | more than 4 years ago | (#31293526)

Be careful, tzanger, I think Mr. Grumbine may be bating you!

Speed is irrelevant (4, Interesting)

UBfusion (1303959) | more than 4 years ago | (#31291492)

I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed. Regarding HDs, I'd like to see results relevant to:

1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?

2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?

3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?

For me, the first issue is the most important of all, given that owning huge 2TB disks is in fact like playing Russian roulette: without a proper backup strategy, you risk all your data at once.

Re:Speed is irrelevant (0)

Anonymous Coward | more than 4 years ago | (#31291694)

Well, this is Slashdot, of course you're going to see useless benchmarks of desktop hardware, laced with advertisements, and snarky comments about Linux and Windows.

What you want is spec.org results for a storage array with new 4k drives in it, and that is WAAAAAAAY over your average /. reader's head.

Re:Speed is irrelevant (1)

Surt (22457) | more than 4 years ago | (#31292474)

I think the answer is that:

#1: only an idiot relies on the MTBF statistic as their backup strategy, so speed matters more (and helps you perform your routine backups faster).

#2: for energy efficiency, you don't buy a big spinning disk for your laptop, you use a solid state device.

#3: wait, i thought you didn't want them to talk about performance? This format should indeed be better performing for video editing, however, since you asked.

Re:Speed is irrelevant (1)

russotto (537200) | more than 4 years ago | (#31292534)

1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?

Yes. By packing the bits more efficiently, each cylinder will have more capacity, thus requiring fewer cylinders and fewer head movements for any given disk capacity.

2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?

Probably slightly but not significantly.

3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?

There should be.

Re:Speed is irrelevant (1)

jedidiah (1196) | more than 4 years ago | (#31292964)

> I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed. Regarding HDs, I'd like to see results relevant to:

You really want to be able to copy your stuff. If your stuff is 2TB, then it makes sense that you would want to copy that 2TB in a timely manner.

So yeah... speed does matter. Sooner or later you will want that drive to be able to keep up with how big it is.

And for an overview that knows how to do math... (5, Informative)

JorDan Clock (664877) | more than 4 years ago | (#31291666)

Anandtech [anandtech.com] has a much better write up on this technology, complete with correct conversions from bits to bytes, knowledge of the difference between 4096 bytes and 4096 kilobytes, and no in-text ads.

Re:And for an overview that knows how to do math.. (5, Funny)

WrongSizeGlass (838941) | more than 4 years ago | (#31291776)

That article doesn't sound like fun at all. How are we supposed to mock it if they haven't made multiple errors, typos and other such blunders? We're smug, semi-knowledgeable 'first posters' with nothing better to do than critique articles that we were too lazy to read or too incompetent to write. I'm going to go wait on the homepage to refresh so I can jump into the next thread without a second thought.

Oh noez! (1)

Kral_Blbec (1201285) | more than 4 years ago | (#31292434)

Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem

Is there a particular reason we should care that a new technology isn't backwards compatible with an obsolete one? Especially in light of the fact that it actually is compatible?
