330 comments

Ok... (-1, Troll)

Eon78 (19599) | more than 12 years ago | (#2531741)

... but does anyone need it?

Re:Ok... (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2531758)

How about the people who filled their 30MB drives in seconds a few years ago? The more things a computer can do, the bigger the files get. All I need now is a slightly bigger hard drive to take advantage...

Re:Ok... (1)

Eon78 (19599) | more than 12 years ago | (#2531766)

I won't say that these file sizes won't be used in the future, but can someone point me to an application that would really benefit from it right now?

Re:Ok... (2, Insightful)

PyroMosh (287149) | more than 12 years ago | (#2531805)

No, not really. But you must remember that if you have a need and you wait until you have that need to develop a solution for it, you have developed the solution too late. Remember "640K should be enough for anyone"? That's an example of not planning for the future adequately. So, no, if you're looking for a here-and-now reason for it, you're not going to find it. But remember that that's not the point.

Re:Ok... (1)

Lithian2 (459137) | more than 12 years ago | (#2531775)

This could totally be a "good thing" with regards to video. Just think of taking all your DVDs and merging them into one big file that just plays... okay, that's what a playlist is for, but surely an insanely high quality video format would need this sort of file size.

Of course, we all have eyes capable of appreciating this insane quality... don't we?

Re:Ok... (3, Insightful)

astrophysics (85561) | more than 12 years ago | (#2531800)

There are about 10^10 solar masses of mass in a large galaxy like our own. At ~10^33 g/solar mass, and 10^23 atoms per gram, that's 10^66 ~ 2^219 particles in our galaxy. Believe me, scientists will make use of as much computing power, RAM, and storage space as they can get their hands on. If only the limiting factor were operating system limitations rather than the more practical realities of funding and costs of hardware.
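
As a quick sanity check of that arithmetic, here is a minimal back-of-the-envelope sketch in C; the constants are the comment's round numbers, not measured values.

    /* Order-of-magnitude estimate of the particle count in a large galaxy. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double solar_masses   = 1e10; /* mass of a large galaxy, in solar masses */
        double grams_per_sun  = 1e33; /* grams per solar mass */
        double atoms_per_gram = 1e23; /* assumed mean atoms per gram */

        double atoms = solar_masses * grams_per_sun * atoms_per_gram;
        printf("atoms ~ 1e%.0f ~ 2^%.0f\n", log10(atoms), log2(atoms));
        return 0;
    }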

Re:Ok... (1)

tunah (530328) | more than 12 years ago | (#2531827)

10^23 atoms/gram => avg molar mass of 6 (lithium). Is this a bit light?

Re:Ok... (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2531862)

Heavy elements are pretty rare compared with hydrogen & helium, so an average atomic weight of 6 is feasible imho.

I would still like to see where the figures come from though.

Re:Ok... (3, Informative)

kkenn (83190) | more than 12 years ago | (#2531819)

Well, it's good to see that Linux has caught up, but the article is not correct that Linux is the first OS to support 48-bit ATA; FreeBSD has had this support for over a month now.

See for example: this file [freebsd.org] which is one of the files containing the ATA-6r2 code, committed to FreeBSD on October 6.

512? That can't be right. (4, Funny)

fonebone (192290) | more than 12 years ago | (#2531747)

The 144 Petabyte figure is obtained by raising two to the power of 48, and multiplying it by 512.

Hm, that can't be right, I swear I heard it was supposed to be two raised to the power of 50, multiplied by 128.. hm.

Re:512? That can't be right. (0)

Anonymous Coward | more than 12 years ago | (#2531836)

Nope. You're all wrong. 144 petabytes is obtained by raising two to the power of 64 (which is two raised to the power of 6) and then divided by 128 (which is two to the power of 7, which is a prime number). Primes are everywhere. My, that's amazing.

Re:512? That can't be right. (2, Informative)

ericzundel (524648) | more than 12 years ago | (#2531871)

The 2^48 figure is the number of blocks that can be accessed on the IDE disk from what I can gather.

2^48 blocks * 512 bytes/block = 144115188075855872 bytes
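
A one-line computation reproduces the figure; this sketch just assumes the standard 512-byte ATA sector.

    /* 48-bit LBA capacity: 2^48 sectors x 512 bytes/sector = 2^57 bytes. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t sectors = UINT64_C(1) << 48;   /* 2^48 addressable sectors */
        uint64_t bytes   = sectors * 512;       /* fits in 64 bits (2^57)   */
        printf("%" PRIu64 " bytes\n", bytes);   /* 144115188075855872       */
        return 0;
    }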

second post! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2531748)

test

Nice! (1)

Per Wigren (5315) | more than 12 years ago | (#2531749)

..."but Linux recently became the first desktop OS to support enormously large file sizes"...

Re:Nice! (1, Informative)

Anonymous Coward | more than 12 years ago | (#2531860)

unless "recently" is 3 years ago... I can name at least on desktop OS which did that before.

BeOS.

"doing it"? (0)

autopr0n (534291) | more than 12 years ago | (#2531761)

The Slashdot writeup here is pretty terrible. Doing what at 144 petabytes?

From the text, it seemed as though someone built a Linux machine with 144PB of RAM or something.

Who gives a crap (0)

Anonymous Coward | more than 12 years ago | (#2531765)

How about a kernel that's secure and stable so we can have a nice secure webserver? All many of us want is a nice secure, stable Linux webserver that will run Apache, PHP, and MySQL happily and forever. No more junky root exploits; it's getting tiresome.

Re:Who gives a crap (0)

Cheesy Fool (530943) | more than 12 years ago | (#2531777)

Upgrade to 2.2.20 then.

Re:Who gives a crap (0)

Anonymous Coward | more than 12 years ago | (#2531781)

Do these Linux kernels seem a bit like MS patches to you, or is it just me?

Just wondering... (1)

pigeonhk (42292) | more than 12 years ago | (#2531767)


"We almost forgot to mention this, but Linux recently became the first desktop OS to support enormously large file sizes."

So what about non-desktop OSes then?

Thanks, but no thanks (0)

Anonymous Coward | more than 12 years ago | (#2531768)

I really don't have any pr0n larger than 600MB, but thanks for asking...

Slashbox (0, Offtopic)

Faux_Pseudo (141152) | more than 12 years ago | (#2531770)

I learned about theregister.co.uk a long time ago because of Slashdot. Now we have a billion choices for slashboxes, but how come we don't have one for the Reg? Every /. reader worth their salt reads the Reg, and it's time it got a slashbox. While we're at it, let's add theinquirer.net as well. This would be a boon for every Linux and hardware buff.

TheRegister no speak with forked tongue .... (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2531815)


Petabytes? Who gives a rat's?

As we all know, Lunix is the most secure OS in the known universe. The Lunix zealots among you should carry on smoking the collective crack pipe.

It amazes me how UNIX vulnerabilities are never highlighted with as much fanfare on /. Let's blame BillG for the following while we're at it:

Compendium of *nix lpd vulnerabilities
By Thomas C Greene in Washington
Posted: 07/11/2001 at 07:24 GMT

So many vulnerabilities affecting the lpd (line printer daemon) have come to light in recent months that CERT/CC has issued a compendium advisory urging all users and admins to review their system configurations and patch status.

"All of these vulnerabilities can be exploited remotely. In most cases, they allow an intruder to execute arbitrary code with the privileges of the lpd server," CERT explains.

A table provided in the above advisory references systems with their corresponding individual advisory.

Affected systems include:

--BSDi BSD/OS Version 4.1 and earlier
--Debian GNU/Linux 2.1 and 2.1r4
--FreeBSD All released versions FreeBSD 4.x, 3.x, FreeBSD 4.3-STABLE, 3.5.1-STABLE prior to the correction date
--Hewlett-Packard HP9000 Series 700/800 running HP-UX releases 10.01, 10.10, 10.20, 11.00, and 11.11
--IBM AIX Versions 4.3 and AIX 5.1
--Mandrake Linux Versions 6.0, 6.1, 7.0, 7.1
--NetBSD 1.5.2 and earlier
--OpenBSD Version 2.9 and earlier
--Red Hat Linux 6.0 all architectures
--SCO OpenServer Version 5.0.6a and earlier
--SGI IRIX 6.5-6.5.13
--Sun Solaris 8 and earlier
--SuSE Linux Versions 6.1, 6.2, 6.3, 6.4, 7.0, 7.1, 7.2

Quite a list -- no doubt soon to be framed on Bill Gates' office wall. ®

Re:TheRegister no speak with forked tongue .... (0)

Anonymous Coward | more than 12 years ago | (#2531874)

It amazes me how UNIX vulnerabilities are never highlighted with as much fanfare on /.

I think the moderator just proved your point:

-1, Offtopic!

Shhhh! Mustn't let anyone know.

Re:TheRegister no speak with forked tongue .... (0)

Anonymous Coward | more than 12 years ago | (#2531891)

>>It amazes me how UNIX vunerabilities are never
>>highlighted with as much fanfare on /.
>
>I think the moderator just proved your point:
>
>-1, Offtopic !
>
>Shhhh! mustn't let anyone know.

LOL!

Yah! I noticed. Can't say it suprises me though.

" A collective illusion of security has get to be a good thing " (TM)

so what? (0)

Anonymous Coward | more than 12 years ago | (#2531772)

I couldn't care less. That's just like saying my processor has 2^64 bytes of address space; will I ever use it all?

- note: this comes from a guy who never ever thought that he would see one of his computers with 2^32 bytes of RAM storage...

Shaky figures. (0)

Anonymous Coward | more than 12 years ago | (#2531773)

They claim a petabyte == 10^15, but that's not true. Just as a kilobyte = 2^10 and a gigabyte is 2^30, a petabyte is 2^50.

So they really got 128 petabytes, if you ask me.

And what's this "2^48 * 512" deal? Why didn't they just say "2^50 * 128" which is more to the point?

Re:Shaky figures. (0)

jquirke (473496) | more than 12 years ago | (#2531849)

The 2^48 * 512 is because the 48-bit address bus addresses _sectors_, not bytes. A sector is usually 512 bytes, so multiply the potential number of sectors by the size of a sector to get the total capacity.

But yeah, I see what you were trying to say: the coefficient left in front of the binary prefix is always 2^(width_in_bits mod 10).

Not so Happy (0, Offtopic)

HappyPerson (525201) | more than 12 years ago | (#2531774)

Anyone know of a flavor of Linux that offers a simple, stable, secure webserver without all the junk piled in?

Re:Not so Happy (0)

Anonymous Coward | more than 12 years ago | (#2531786)

Anyone know of a flavor of Linux that offers a simple, stable, secure webserver without all the junk piled in?

*BSD

Re:Not so Happy (1)

Rendus (2430) | more than 12 years ago | (#2531799)

You would probably be happy with Slackware - www.slackware.com. Install only what you need.

Finally! (2)

case_igl (103589) | more than 12 years ago | (#2531778)

This is what we've all been waiting for!

Now Linux can really own as a legitimate desktop OS!

Seriously though...Isn't there a better place for someone who has the time to contribute? I'd rather see a better desktop environment, a better E-mail package, etc...

(Flame away, all of you running on 200Mhz machines with a four gig drive who will post about how awesome this new support is!)

damn (0)

Anonymous Coward | more than 12 years ago | (#2531779)

i finally have a place to put all that porn!!!

Somewhat misleading (5, Interesting)

nks (17717) | more than 12 years ago | (#2531780)

The IDE driver supports such ridiculously large files, but no filesystem that I know of currently does, not to mention the buffer management code in the kernel.

So does Linux support 18PB files? Kind of -- pieces of it do. But the system as a whole does not.

Or in other words... (4, Interesting)

PD (9577) | more than 12 years ago | (#2531783)

2.197265625 trillion Commodore 64's.

98.7881981 billion 1.44 meg floppy disks.

1.44 million 100 gig hard drives

or

3.5 trillion 4K ram chips (remember those?)

Re:Or in other words... (2)

PD (9577) | more than 12 years ago | (#2531788)

I mean 35 trillion 4K ram chips.

Re:Or in other words... (1)

koekepeer (197127) | more than 12 years ago | (#2531803)

there's strength in numbers.

given enough monkeys and typewriters...

hehe

meneer de koekepeer

Re:Or in other words... (-1)

Fucky the troll (528068) | more than 12 years ago | (#2531911)

I thought I'd better get the monkeys+typewriters=internet joke in before anyone else does.

worm+typewriter=linux admin

watchit (2, Insightful)

mr_exit (216086) | more than 12 years ago | (#2531787)

Remember when 640K was enough for everybody? Well, I for one am scared by the fact that one day soon 144 petabyte files will seem small.

- Lord of the Rings is boring. There is a distinct lack of giant robots in it. Good movies have giant robots, bad movies don't. -james

Can we see a SLASHDOT version of linux (1)

HappyPerson (525201) | more than 12 years ago | (#2531793)

Can we see a Slashdot version of Linux that's made to be a secure standalone webserver (Apache, MySQL, PHP)? Forget the banner ads; I'd pay you some money if you could create an ISO for us /.'ers.

You guys already know how, so why not share?

WTF??? (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2531834)

Are you a complete retard? From your user comments it seems that way. What is this slashdot version of Linux you speak of? All these fuckheads that run the site do is hack together shitty Perl code, lick each others shit encrusted assholes, and post boring and repetitive stories for morons like you to respond to.

Re:WTF??? (0)

Anonymous Coward | more than 12 years ago | (#2531842)

Then why are you here? Ignorant troll.

Performance (1)

Lithian2 (459137) | more than 12 years ago | (#2531794)

I am just wondering here: is there some sort of performance hit in addressing normal files while adding in support for this "petabyte" feature?

Surely the number of bits needed to address this is going to increase, and more data for addressing means less data for good ol' file transfer.

Is this going to be a noticeable difference, or am I just being a bit whore?

XFS (5, Informative)

starrcake (25459) | more than 12 years ago | (#2531796)

http://oss.sgi.com/projects/xfs/features.html

XFS is a full 64-bit filesystem, and thus, as a filesystem, is capable of handling files as large as a million terabytes.

2^63 = 9 x 10^18 = 9 exabytes

In future, as the filesystem size limitations of Linux are eliminated XFS will scale to the largest filesystems.

Re:XFS (1, Informative)

Anonymous Coward | more than 12 years ago | (#2531832)

You make it sound like XFS has been doing this for a while. But no:

In future, as the filesystem size limitations of Linux are eliminated XFS will scale to the largest filesystems

Before this, you couldn't access drives bigger than 128GB, and a 64-bit filesystem wouldn't have helped. You make it sound like this update was for a specific filesystem, but that's not true; this update was at the device level.

Re:XFS (1)

ZerothAngel (219206) | more than 12 years ago | (#2531851)

Does this include modifications to various syscalls? Last time I tried porting something to Linux, I was dismayed to find that off_t (used in stat(2), lseek(2), etc.) was only 32-bits on x86 platforms.

It doesn't seem very useful to have a 64-bit filesystem if your applications are limited to 32-bit operations.

Re:XFS (0)

Anonymous Coward | more than 12 years ago | (#2531859)

Try a 2.4.x kernel. I've got quite a few files >2GB on my linux boxen

Re:XFS (0)

Anonymous Coward | more than 12 years ago | (#2531864)

You need to compile with large file support. The functions default to 32-bit for backwards compatibility.

http://www.suse.de/~aj/linux_lfs.html [www.suse.de] has more info.

It should also be noted that XFS isn't the only 64-bit filesystem. ReiserFS is too, and I'm sure (though I don't know for a fact) that most of the other new ones are as well.
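
To make "compile with large file support" concrete, here is a minimal sketch for a 32-bit glibc system: defining _FILE_OFFSET_BITS=64 before the system headers is the knob the SuSE page above describes (the filename here is made up for illustration).

    /* Opting into Large File Support: with _FILE_OFFSET_BITS=64 defined
       before any includes, off_t becomes 64 bits even on 32-bit x86. */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bigfile", O_RDWR | O_CREAT, 0644); /* hypothetical file */
        if (fd < 0)
            return 1;
        /* Seek 3GB in -- an offset a plain 32-bit off_t could not represent. */
        off_t pos = lseek(fd, (off_t)3 * 1024 * 1024 * 1024, SEEK_SET);
        if (pos == (off_t)-1)
            perror("lseek");
        close(fd);
        return 0;
    }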

working with large files (1)

nandix (150739) | more than 12 years ago | (#2531804)

This might be useful for some very large database tables (assuming you don't use raw devices). That said, this is when I turn this into a mini Ask Slashdot:

While I have no problems writing/reading large files (i.e., >2GB), most regular Linux software can't deal with them. For instance, I can't upload them with ftp. I'm having this problem with a mysqldump file that's part of a system backup. Right now it's not a real problem, since I can gzip the file and the size goes down to approx. 250MB, but how do you guys handle large files in Linux anyway?

Article got it wrong on BeOS - 18 EXAbytes! (5, Informative)

Snard (61584) | more than 12 years ago | (#2531806)

Just a side note: BeOS has support for files up to 18 exabytes, not 18 petabytes, as stated in the article. This is roughly 18,000 petabytes, or 2^64 bytes.

Just wanted to set the record straight.

British and American exabytes... (0)

Anonymous Coward | more than 12 years ago | (#2531902)

Maybe it's the same thing as with billions? (10^9 while wearing a gun, but 10^12 while smiling into a CCTV camera)

OK this is great... (5, Insightful)

TheMMaster (527904) | more than 12 years ago | (#2531807)

Now, can I really imagine someone buying a 144PB drive (array) and using IDE?? I would personally go for SCSI there ;-)

What I am really wondering is: is there at the current moment ANY company/application/whatever that requires this amount of storage? I thought that even a large bank could manage with a few TBs.
Not intended as a flame, just interested.

but still, this is a Good Thing (r)

Somebody will probably correct me ... (3, Interesting)

King Of Chat (469438) | more than 12 years ago | (#2531841)

... but a couple of years ago, I was investigating OODBMSs. The sales bloke for (I think it was) Objectivity claimed that CERN were using their database for holding all the information from the particle detector things - which I can see being a shedload of data (3d position + time + energy). He was suggesting figures of 10 petabytes a year for database growth (so it must be frigging huge by now).

Of course, this was probably salescrap. Does anyone know the truth on this?

Re:Somebody will probably correct me ... (5, Insightful)

Anonymous Coward | more than 12 years ago | (#2531903)

Of course, this was probably salescrap. Does anyone know the truth on this?

The BABAR experiment [stanford.edu] at SLAC [stanford.edu] is using Objectivity for data storage. Unfortunately, I cannot find a publicly available web page about computing at BABAR right now.

The amount of data BABAR produces is on the order of tens of terabytes per year (maybe a hundred), and even storing this amount in Objectivity is not without problems. The LHC [web.cern.ch], which is currently under construction, will generate much more data than BABAR, but even if they reach 10 petabytes per year one day, I very much doubt that they will be able to store this in Objectivity.

Re:OK this is great... (2, Informative)

rhodespa (152821) | more than 12 years ago | (#2531867)

The bank I work for currently stores 1.5TB a day worth of data. Almost none of it is ever looked at again, but a huge proportion of it is required by regulators. Of course this all goes on tape, since there is no requirement for speedy access.

Random statistics.... (4, Funny)

tunah (530328) | more than 12 years ago | (#2531810)

Let's say you have this 144 petabyte drive. Okay, it's Friday, time to back up.

So you whip out your two hundred million CD recordables, and start inserting them. Let's say you get 1 frisbee for each 25 700MB CDs.

This leaves you with eight million frisbees.

That's a stack 13 kilometres high.

So who needs this on a desktop OS again?

Re:Random statistics.... (5, Funny)

ColaMan (37550) | more than 12 years ago | (#2531876)

So you whip out your two hundred million CD recordables, and start inserting them. Let's say you get 1 frisbee for each 25 700MB CDs.

Silly Moo!

You back it up to your *other* 144 petabyte drive!

FreeBSD had it first. (1, Informative)

Anonymous Coward | more than 12 years ago | (#2531816)

FreeBSD had it first. For over a month. Read the committer CVS logs and weep, penguin boys.

http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/dev/ata/ata-disk.c -> version 1.114

Re:FreeBSD had it first. (0)

Anonymous Coward | more than 12 years ago | (#2531826)

André 'Crybaby' Hedrick: We're the leaders, wait for us!

Re:FreeBSD had it first. (0)

Anonymous Coward | more than 12 years ago | (#2531887)

The difference here: no one cares about BSD anymore...

144 PB, not really (5, Insightful)

tap (18562) | more than 12 years ago | (#2531818)

Sounds like all they are saying is that the new IDE driver can support 48-bit addressing. With 2^48 sectors of 512 bytes, you get 144 PB. But there are a LOT of other barriers to huge filesystems or files.

For instance, the Linux SCSI driver has always supported 32-bit addressing, good enough for 2 terabytes on a single drive. But until recently, you couldn't have a file larger than 2 gigabytes (1024x smaller) in Linux. I think that the ext2 filesystem still has a limit of 4 TB for a single partition.

So while the IDE driver may be able to deal with a hard drive 144 PB in size, you would still have to chop it into 4 TB partitions.

Great, now I can store enough streaming porn (-1, Redundant)

yatest5 (455123) | more than 12 years ago | (#2531823)

to last a whole lifetime - hmmm, now just need that 144 petabyte hard dick, er, disk...

Uh, no? (3, Informative)

srichman (231122) | more than 12 years ago | (#2531830)

Correct me if I'm wrong, but isn't this very very misleading? The article states that the Linux IDE subsystem can now support single ATA drives up to 144 petabytes (i.e., Linux ATA now has 48-bit LBA support), but my understanding is that many other aspects of the Linux kernel limit the maximum file size to much less.

I'm looking at the Linux XFS feature page [sgi.com], which states:

Maximum File Size
For Linux 2.4, the maximum accessible file offset is 16TB on 4K page size and 64TB on 16K page size. As Linux moves to 64 bit on block devices layer, file size limit will increase to 9 million terabytes (or the system drive limits).

Maximum Filesystem Size
For Linux 2.4, 2 TB. As Linux moves to 64 bit on block devices layer, filesystem limits will increase.

My understanding is that the 2TB limit per block device (including logical devices) is firm (regardless of the word size of your architecture), and unrelated to what Mr. Hedrick did. Am I wrong? Does this limit disappear if you build the kernel on a 64-bit architecture?

And, on 32-bit architectures, there's no way to get the buffer cache to address more than 16TB.

144 ? PB (1)

dabadab (126782) | more than 12 years ago | (#2531837)

It should be 128 PB, as 2^48 * 512 B = 2^48 * 2^9 B = 2^57 B = 2^7 PB = 128 PB. Q.E.D.
(2^48 is the number of blocks (since 48-bit addressing is used) and each block contains 512 bytes of data.)

Re:144 ? PB (1)

bowb (209411) | more than 12 years ago | (#2531869)

Hard drive capacities are measured in powers of 10 (go to any HD maker's site and look at their spec sheets; they always have a footnote saying this). A petabyte, when talking about HDs, is 10^15 bytes precisely.

With 48 bits, and 512 bytes/sector, you have

2^48 * 512 = 144 115 188 075 855 872

which is enough to address 144 (HD) petabytes

144 PB = 144 000 000 000 000 000 bytes
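
A short sketch makes the 144-vs-128 dispute concrete: the same 2^57 bytes come out as 144 petabytes by the decimal (10^15) definition or 128 by the binary (2^50) one.

    /* The same byte count under decimal and binary petabyte definitions. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bytes      = (UINT64_C(1) << 48) * 512;  /* 2^57 bytes      */
        double   decimal_pb = (double)bytes / 1e15;       /* PB = 10^15 B    */
        double   binary_pb  = (double)bytes / (double)(UINT64_C(1) << 50);
        printf("%.1f decimal PB, %.0f binary PB\n", decimal_pb, binary_pb);
        return 0;
    }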

144 or 128 petabytes? (2, Interesting)

ukryule (186826) | more than 12 years ago | (#2531838)

Is 1 petabyte 1000^5 or 1024^5? (i.e., is it 10^15 or 2^50?)

If 1kB = 1024 Bytes, then I've always assumed that 1MB = 1024kB (instead of 1000kB), 1GB = 1024MB, and so on.

Normally this doesn't make that much difference, but when you consider the cost of a 16 (144-128) petabyte hard drive, then the difference is more important :-)

Finally something is done right.... (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2531844)

The drive size limitations for IDE drives have been driving me nuts for years. First we had 0.5G, then 2G, then 8G, 32G and finally 128G (or 137G actually)... every time the barrier was moved forward to be 4 times larger than before, which meant we needed a new kludge every 2 years. At least now it'll take a little bit longer before we need yet another addressing scheme. By the way, this would be an excellent opportunity to nuke the old DOS partition table format (happily I know *BSD never needed it) once and for all, as well.

Btw, don't mix up two distinct things: 1) being able to address 2^48 sectors on an IDE disc, and 2) having a filesystem that can handle files as large as 2^48 sectors.

Fantastic! Only one problem..... (0)

Anonymous Coward | more than 12 years ago | (#2531845)

Do you wish to save "Entire knowledge of the Universe.txt" ? This may take some time.....

You clicked YES. Thank you, please get along with the rest of your life while I save your file....

Very nice, but not really what I'd like to see... (2, Interesting)

shic (309152) | more than 12 years ago | (#2531857)

From my perspective, while obscenely large limits on file system sizes are no bad thing, I'm more interested by the prospect for scalability in the context of realistic problems. I see much larger challenges in establishing systems to maximally exploit locality of reference. I'd also like to see memory mapped IO extended to allow direct use to be made of entire large scale disks in a single address space using a VM-like strategy ... but I guess this will only be deemed practicable once we're all using 64 bit processors. Are there any projects to approximate this on 32 bit architectures?

Files vs Disks (1)

instinc (457765) | more than 12 years ago | (#2531865)

While we can access disks as files, the question that arises is: even though the IDE drivers have such large addressing capabilities, are the file handles and the file system able to support files, and I really do mean files here, larger than 4GB?

I remember creating a tarball file (due to a bug in tar) larger than 4GB, and I couldn't even access/delete/unlink it anymore from my hard drive; Linux would simply not allow it.

1st desktop OS? Well, not quite. (5, Informative)

mr (88570) | more than 12 years ago | (#2531878)

Before you start thumping your chest about how superior or cutting-edge *Linux is, go look at these two links:
A Slashdot story pointing out how, without the FreeBSD ATA code, the Linux kernel would be 'lacking'
The FreeBSD press release announcing the code is stable [freebsd.org]

If The Reg had actually researched the story, Andy would have noticed it is not a 'first' but more a 'dead heat' between the 2 leading software libre OSes. Instead, The Reg does more hyping of *Linux.

perverts ! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2531882)

You nasty Linux people. You keep your hard disks away from those little children !

if windows could handle it... (1)

motherhead (344331) | more than 12 years ago | (#2531884)

Could you imagine the bloat Microsoft would put into Office XP 2002?

It would come on 20 DVDs, and Word itself would be a 22GB install. The paperclip would be all nVidia 3D-rendered and exponentially more useless and annoying.

Pebibytes? (4, Informative)

Rabenwolf (155378) | more than 12 years ago | (#2531897)

And this is even more impressive in pebibytes, too.

Well, according to the IEC standard [nist.gov], one petabyte is 10^15 (or 1e+15) bytes, while one pebibyte is 2^50 (or 1.125899e+15) bytes.

So 144 petabytes is 1.44e+17 bytes or 127.89769 pebibytes. Can't say that's more impressive tho. :P

File sizes (1)

FiendBeast (461063) | more than 12 years ago | (#2531913)

How many of these files can Linux support? Is there a limit on the size of the disk that can be used?