
Massive Storage Advances

CmdrTaco posted more than 13 years ago | from the fits-in-the-palm-of-your-hand dept.


pra9ma writes: "Scientists from Keele University, in England, have succeeded in creating a system that enables up to 10.8 terabytes of data to be stored in an area the size of a credit card, with no conventionally moving parts. This, along with 3 other forms of memory, could revolutionize storage. The company said the system could be produced commercially within two years, and each unit should cost no more than $50 initially, with the price likely to drop later." I'm unconvinced about their compression algorithm, but if it works, this is gonna be amazing.

sure (1)

Anonymous Coward | more than 13 years ago | (#436032)

Great, just when napster is closing up shop. Now what do I use 10 terabytes for?

I, too, can offer this technology (1)

Anonymous Coward | more than 13 years ago | (#436034)

to Slashdot.

Stop posting ridiculous stories like this, and you will save terabytes in bandwidth and storage requirements for all the "you've been had" comments.

Isn't it impossible? (1)

Anonymous Coward | more than 13 years ago | (#436035)

Let's say we have:

11010111 10110100

Those are our two bytes.
How would you record the difference?

The 'difference' would be:

Now how does that take up any less space? Just a little food for thought... no?
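The article's "record only the differences between words" idea resembles front coding from dictionary compression, which pays off when consecutive words share long prefixes (e.g. a sorted word list). A minimal sketch of that reading; the function names and sample words here are mine, not from the article:

```python
def front_code(words):
    """Encode each word as (shared-prefix length, remaining suffix)."""
    encoded, prev = [], ""
    for w in words:
        # Length of the prefix shared with the previous word
        k = 0
        while k < min(len(prev), len(w)) and prev[k] == w[k]:
            k += 1
        encoded.append((k, w[k:]))
        prev = w
    return encoded

def front_decode(encoded):
    """Rebuild each word from the previous word's prefix plus the stored suffix."""
    words, prev = [], ""
    for k, suffix in encoded:
        w = prev[:k] + suffix
        words.append(w)
        prev = w
    return words

words = ["compress", "compressed", "compression", "compressor"]
coded = front_code(words)
assert front_decode(coded) == words
# Stored suffix characters: 8 + 2 + 3 + 2 = 15, versus 39 characters raw
# (plus one small integer per word). Unsorted running text shares far less,
# which is why the blanket 8:1 claim draws skepticism in this thread.
```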

OH MY GOD! (1)

Anonymous Coward | more than 13 years ago | (#436036)

Nothing like technology induced orgasms... yummy

Napster frigging alternatives.. (1)

kidlinux (2550) | more than 13 years ago | (#436038)

Go to AudioGalaxy [audiogalaxy.com]. Once you do, trust me, you'll just laugh that Napster is goin' down. I've converted many of my friends, all who say it's much better than Crapster.

Re:This may never happen . . . (1)

kidlinux (2550) | more than 13 years ago | (#436039)

Yeah, so that when they run out of oil, and have sapped all our money with oil, they'll bring that battery patent out of the basement, mass-produce the things, and then suck money out of us using that.

Unlikely (1)

flanker (12275) | more than 13 years ago | (#436044)

"No conventionally moving parts"? How does the little fiber-optic-thingie-in-goo read the surface? Assuming you are talking about a single layer of storage here... 10 terabytes in something like 50 square cm? Is that something like 200 quadrillion bits per square meter? 1 bit of information requires only .00001 nanometers? Riiigghtt. Is that molecularly possible? Oh yes, sorry, I forgot, they're running it through WinZip first.

what about IO speed? (1)

deltavivis (26381) | more than 13 years ago | (#436050)

I'm not quite as quick as most here to completely dismiss this stuff as impossible. They might be exaggerating a bit, but more likely than not they've got some pretty clever ways of storing a lot of data on a small area over at Keele. People have proposed gee-whiz storage stuff for years, gotten working prototypes in the lab, but have yet to produce something which has the combo of cheapness and speed of the trusty hard drive. If they have figured out a way to get storage anywhere near what they are talking about, at anywhere near $50, conventional wisdom leads me to guess that the read speed has got to be abysmal. I'm not ready to throw out my hard drives quite yet...

Re:Nonsense (1)

Tyriphobe (28459) | more than 13 years ago | (#436051)

"no conventionally moving parts" - yeah, that's the best bit of confus-o-matic speech. Then they go on to say it means that every cm^2 has a moving part, but it's not "conventional". I guess if it were a fiber+gear, it would be conventional. As it stands, maybe they just randomly push the fiber around until it illuminates some of the data you want?

So to repeat what every other poster has already said, which putz put this story up? I could point you to stories from The Onion that make more sense.

It won't replace hard drives... (1)

Azza (35304) | more than 13 years ago | (#436054)

Data access time is around 100 Mb/sec.

Don't expect it to replace hard drives any time soon, let alone RAM. 100Mb/sec is pretty slow, compared to, say Ultra-2 SCSI (640Mb/sec), or Ultra ATA/66 (528Mb/sec).


Skynet (37427) | more than 13 years ago | (#436056)

Just in case someone accidentally clicks it in a drunken stupor. ;-)

Re:sure (1)

dougmc (70836) | more than 13 years ago | (#436070)

Indeed. The Napster server I just connected to only has 6.2 terabytes of mp3s online ...

[1478472 songs in 8940 libraries (6293 gigs)] [U/0 D/0]

and much of that's probably duplicates ...

Re:hmmm (1)

Kwikymart (90332) | more than 13 years ago | (#436079)

Not everything that is in a concept stage is vaporware. Things take time to evolve. Magical technologies don't just get off the drafting board and into consumer hands overnight. It's unfair to label it vaporware because you have the economic attention span of a walnut. *Patience*: under development does not mean vaporware.

10.8 terabytes with today's systems? (1)

Kreeblah (95092) | more than 13 years ago | (#436082)

A few questions. First, is there any widely-used operating system (or BIOS, for that matter) that can even address, let alone use, a 10.8 terabyte drive? Second, I see no reference as to what kind of interface it will have (IDE, SCSI, USB *shudder*, etc.). Third, since this will have no moving parts, could this be considered solid state storage? Finally, since there are no moving parts, wouldn't this have incredibly quick access times?

Screw vaporous, I'm just plain tired of these... (1)

gvonk (107719) | more than 13 years ago | (#436086)

Now, it's not the fault of the editors, but I am getting really tired of all of these people with their "high capacity," "cheap" memory. Just tell me when you have a WORKING MODEL that I CAN ORDER from your website. Until then, I am going to sleep.

but that would mean... (1)

gvonk (107719) | more than 13 years ago | (#436087)

dude, the hard drive companies are just gonna pick it up and start making it... end of story. anyone who is smart just buys their possible competitors and makes off like a bandit.

[ot] Your sig is US-101k-centric (1)

gvonk (107719) | more than 13 years ago | (#436088)

I am mad! What if I have my own layout or Dvorak or something and I want to email you. I'm suing!!!

500:1 compression can easily be achieved on text (1)

hexx (108181) | more than 13 years ago | (#436089)

Using my semi-linear fergulseon trinaric algorithms.
Couple that with a succint arbitrary byte foam agent through a C.K.I. softlense and you're at over 5000:1.

Of course, I won't show anyone anything regarding the science behind these claims.
It's just a claim.

Anyone know a good V.C.?

Re:Some questions I have about this (1)

jred (111898) | more than 13 years ago | (#436093)

Would it even need to be rewritable? Most tech comes out initially w/ read-only/write-once (CD, DVD) drives, with recorders or rerecorders (I guess that should have been rewriters) priced way out of reach for Joe Schmoe. But the prices come down & they become readily available. Besides, what was it? 10 TB? With that kind of space you'd rarely *need* to erase anything.

www.cautioninc.com [cautioninc.com]

SCORE! (1)

bandit450 (118835) | more than 13 years ago | (#436098)

Yes, finally a cheap and effective way to store some of my MP3 collection!

But then again, you realize what I said once about 5 and a quarter floppy disks..."Holy cow! I'm never going to need more than 10 of these things! They're HUGE!"

Re:The compression algorithm... (1)

dtr21 (120759) | more than 13 years ago | (#436101)

I don't know about that. I remember in a recent Information Theory course I did at Uni, we learnt that the information content of an ensemble with 26 different equally possible outcomes is 4.7 bits per symbol. If those symbols happened to be the 26 letters of the alphabet, and the language was English, then this dropped to 4.0 bits per symbol (due to the redundancy in the English language). The implication seems to be that 8 bits for ASCII text is only about 50-60% efficient (forgetting entirely about capitalisation and punctuation for now).

Your statement would imply that 1 bit of information would be enough to tell you (in context of course) which letter out of the 26 possible ones follows any given letter. So for example, given the sentence:
The cat _
you would only need 1 bit of information to tell me which character is going to go next.

I feel somewhat doubtful...

Although if you are correct, please reply to me because I do have an interest in this topic.
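The parent's figures are easy to verify: 4.7 bits is just log2(26), and at the quoted 4.0 bits per symbol for real English letters, 8-bit ASCII storage works out to the 50% efficiency mentioned. A quick check:

```python
import math

# 26 equiprobable symbols carry log2(26) bits of information each
bits_uniform = math.log2(26)
assert abs(bits_uniform - 4.7) < 0.01   # matches the parent's 4.7 bits/symbol

# At the quoted ~4.0 bits/symbol for English letters, storing each one
# in an 8-bit byte is only about 50% efficient
efficiency = 4.0 / 8
print(round(bits_uniform, 2), efficiency)
```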

Re:what about IO speed? (1)

bitchazz (134990) | more than 13 years ago | (#436108)

"Data access time is around 100 Mb/sec"

from the university link mentioned above...

Re:The compression algorithm... (1)

ca1v1n (135902) | more than 13 years ago | (#436111)

1) I do know what entropy means. I know that compression and entropy are not explicitly linked, but they are related, and the most efficient algorithms come close to the entropy ratio.

2) I specifically referred to ASCII. You don't need to tell me that a 16-bit representation and an 8-bit representation of the same data will reduce to the same thing when compressed properly.

3) 10:1 is about one bit per byte. If you want to be picky, you could say 0.8 bits per byte. So you heard a different rule of thumb that is a little more specific. That doesn't invalidate the one that's slightly rounded, especially when it happens to correspond exactly to the algorithm in question.

I've been getting crap like this all week. Why do all the trolls now feel they must post correct responses to my posts that nonetheless fail to disprove them, and assert superiority while they're at it? I suppose I might as well ask why my sister likes Britney Spears...

Re:The compression algorithm... (1)

ca1v1n (135902) | more than 13 years ago | (#436112)

Sorry. I've been getting shit like this all week, just all of a sudden. All of it comes from ACs or people with very high slashdot IDs who have only posted one comment in the past few weeks, almost as if they're after me. Yeah, probably just a coincidence, but I'm pissed off nonetheless.

Wrong Card! (1)

TermAnnex (154514) | more than 13 years ago | (#436124)

Imagine taking the wrong card to the grocery store...

The cashier swipes the card and the display starts cycling through all of your porn instead of charging your account.

RIAA tax (1)

AntiNorm (155641) | more than 13 years ago | (#436127)

The company said the system could be produced commercially within two years, and each unit should cost no more than $50 initially, with the price likely to drop later

Will there be a RIAA/MPAA tax on it? 10.8 terabytes is a *lot* of space.

Check in...OK! Check out...OK!

Re:Nonsense (1)

The Dark (159909) | more than 13 years ago | (#436128)

The way I read the few details in the article, the "$50 a unit" means each 1 cm^2 (about .25 of a terabyte judging from my credit card). That brings it down to only a 25-fold increase.
Still pretty good, but if you take away the 8x compression (for text only - zip gets better than this) you get a 3x increase.
A 3-fold increase in gigabytes per dollar in two years? Sounds pretty normal, maybe even a bit low.
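The parent's per-cm^2 reading checks out roughly, assuming standard card dimensions of about 8.6 cm x 5.4 cm (my assumption, not from the article):

```python
# Sanity check of the "$50 per cm^2" reading of the article,
# assuming a standard credit card face of about 8.6 cm x 5.4 cm
area_cm2 = 8.6 * 5.4             # ~46.4 cm^2
tb_per_cm2 = 10.8 / area_cm2     # ~0.23 TB/cm^2, close to the parent's 0.25
full_card_cost = 50 * area_cm2   # ~$2,322 if each cm^2 really costs $50
print(round(tb_per_cm2, 2), round(full_card_cost))
```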

Re:To quote some guy I can't remember . . . (1)

tbdean (163865) | more than 13 years ago | (#436134)

You know it's not really a *law*, right? It is technically possible to break Moore's *law*.
T. Bradley Dean

Re:Always the size of a credit card (1)

LessTalc (164898) | more than 13 years ago | (#436138)

Also, the most important dimension of a credit card is its thinness. When people claim that devices are credit-card sized, they usually mean in only two dimensions. This technology is another example of that - the thing is 10mm thick. That's not sleek enough to carry around in your wallet!

Re:The compression algorithm... (1)

LessTalc (164898) | more than 13 years ago | (#436139)

You shouldn't show it - if you're getting visibly pissed off it's all the more entertaining for the ACs.

Discuss amongst yourselves, here's a topic... (1)

aztektum (170569) | more than 13 years ago | (#436145)

How can Rambus collect royalties off this?

aztek: the ultimate man

Sounds like BS to me.. (1)

fist (178568) | more than 13 years ago | (#436150)

Looks like it's just another company with greatly inflated figures trying to get investors...

It's Old News (1)

Alien54 (180860) | more than 13 years ago | (#436151)

I remember seeing this last year, when they said it was three-plus years off. So at least it is keeping up with projections.

Here are a couple of stories from The Register on this:

ONE [theregister.co.uk] - UK boffins reckon they can cram 10,800GB of data on a PC card
Monday, 9 October 2000

TWO [theregister.co.uk] - UK boffins unveil $35 '2300GB on a PC Card' RAM breakthrough
Monday, 9 August 1999

Note that the original stories said that the figures were in the thousands of gigabytes - this means TERABYTES

Cavendish Management Resources (CMR [cmruk.com]) seems to be an investment company. Keele University [keele.ac.uk] also seems legit, although the Cavendish website seems under the weather.

So it looks like they are making it through the vaporware stage, and approaching the heavy fog stage, before we watch it materialize.

Bottom line for me is that I do not think I will hold my breath waiting, but I would love it to happen.

It's a Question Of speed.... (1)

icars69 (187201) | more than 13 years ago | (#436157)

... Since there are no numbers in the article, it makes me wonder if this "wonderful" new medium isn't slower than molasses in Siberia.
A terabyte is nice... but not if I can read it out of a book faster...


Compression (1)

bbay (192854) | more than 13 years ago | (#436158)

Uh... Someone should check CCITT fax encodings for prior art with regard to that compression scheme.

ummm (1)

vectus (193351) | more than 13 years ago | (#436160)

The first invention is a method of compressing text stored in binary form, which expresses information as a series of noughts and ones, by comparing each word with its predecessor and recording only the differences between words. This compresses the data to an eighth of its normal size.

Doing this would slow down any computer by too unreasonable a factor for me to even consider it. Why would I buy a product that will waste my cycles as much as this would, rather than buy a conventional hard drive, which will have comparable space by then, for a similar price? (the $50 price tag WILL grow)

Re:Reminds me of the TCAP.... (1)

nekid_singularity (196486) | more than 13 years ago | (#436163)

That American Computer Company is one of the weirdest sites on the net. It seems like just another computer hardware e-tailer, until you get to the bizarre claims of having alien technology recovered from Roswell that is soooooooo much better than ours. I always appreciated those sites that kept you guessing as to their veracity, and this is one of the best.

Anyone remember the Trans-Capacitor? (1)

TheNarrator (200498) | more than 13 years ago | (#436165)

Another major storage advance that never materialized.... http://byamerican.com/abouttcap.htm

Not New News (1)

PineHall (206441) | more than 13 years ago | (#436167)

EETimes published this article [eetimes.com] back in 1999, which has a little more detail. Funny that there was a theoretical 2-year time period for possible commercial products then, too.

Re:To quote some guy I can't remember . . . (1)

fatmantis (218867) | more than 13 years ago | (#436175)

you know, moore's 'law' has nothing to do with magnetic or optical storage densities, right? just transistors. thank you.

Re:The compression algorithm... (1)

GMontag451 (230904) | more than 13 years ago | (#436178)

1) I do know what entropy means. I know that compression and entropy are not explicitly linked, but they are related, and the most efficient algorithms come close to the entropy ratio

No, you obviously don't know what entropy is. In your original post, you claim that in English, there is usually only one bit of entropy for every byte. Then you tried to use this to support a 8:1 compression ratio. This is utter bullshit. An 8:1 compression ratio would mean 7 bits of entropy for every byte. One bit of entropy for every byte would produce at most an 8:7 compression ratio. Entropy is the useless or redundant information. Just like in chemistry, where the term came from. Entropy is all the heat energy that can't be used for work, because it is too random.
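For readers keeping score in this entropy argument, Shannon's source-coding theorem is the arbiter: a lossless coder cannot average fewer bits than the source entropy, so H bits of entropy per 8-bit character caps compression at 8/H to one (1 bit per byte would therefore allow up to 8:1, not 8:7). A zeroth-order estimate, which only sees single-character statistics, is easy to compute; the sample text below is mine:

```python
import math
from collections import Counter

def entropy_bits_per_char(text):
    """Zeroth-order Shannon entropy: ignores all inter-character structure."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50
h = entropy_bits_per_char(sample)
max_ratio = 8 / h   # the theorem's ceiling for 8-bit characters
print(round(h, 2), round(max_ratio, 2))
# Single-character statistics alone allow less than 2:1 on this sample;
# the larger ratios argued about in this thread come from modeling whole
# words and longer-range redundancy, which lowers the effective entropy.
```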

Re:ummm (1)

GMontag451 (230904) | more than 13 years ago | (#436179)

Even lossless compression slows down your computer. Did you ever run a "transparent" hard drive compression utility? You know, the kind where all your files are compressed, and whenever you open them it decompresses them on the fly? I'll admit it was quite a while ago that I was using it, but the read/write times were quite a bit slower. Maybe it wouldn't be noticeable nowadays, but it sure was back then.

To quote some guy I can't remember . . . (1)

DeadMeat (TM) (233768) | more than 13 years ago | (#436181)

It's vaporware until I have it in my hands.

(And I'm kind of doubting that's going to happen. I mean, come on, 10 TB for $50 in 2 years? That's a bit ahead of Moore's Law.)

No moving parts (1)

NineNine (235196) | more than 13 years ago | (#436185)

"No moving parts" is the key. Hard drives are the last moving parts in computers, other than cooling fans. Once hard drives are solid state, not only will data stick around for much longer, but all computers will be much more reliable. Hard drive failures are the major reason for most system failures, and because it's the data, they're obviously the most catastrophic, too.

Re:Nonsense (1)

ideut (240078) | more than 13 years ago | (#436190)

Ah, so you reckon their "10 TB of data" means "1.25 TB of data, which can represent 10 TB of low-entropy text". That's probably quite likely - and on closer reading, it looks like it is indeed $50 per square cm. But this development would still represent an unrealistically radical improvement in *areal density* of storage.

Re:The compression algorithm... (1)

ideut (240078) | more than 13 years ago | (#436191)

a 16-bit representation would compress almost exactly the same as an 8-bit representation using Lempel-Ziv or derivative techniques

Just to clarify, you mean that a 16-bit representation would compress to almost exactly the same eventual size; you don't mean that the compression ratio would be almost exactly the same.

Just thought it was a bit easy to misinterpret as it stood :-)

This may never happen . . . (1)

DavidBerg (240666) | more than 13 years ago | (#436192)

This reminds me of an old Barney Miller episode from when I was younger. An inventor invented a battery that would last for 10 years at full use. It would revolutionize the disposable battery market. The battery companies wanted the guy killed.

Do you actually think Seagate, IBM, Quantum/Maxtor, Hitachi, and Fujitsu, each of which has invested billions in hard drives, would like to see a product like this come out? Forget the drive companies, imagine the tape companies as well. There would be no need to back up to tape, because the media is so cheap you'd just replicate it.

Personally, I would love to have one! Just think of all the Pron and l33t warez I could store :)

finally (1)

abcbooze (245097) | more than 13 years ago | (#436194)

I won't have to decide which warez to keep and which to delete...with this type of storage I'll be able to keep it all!

Re:No moving parts (1)

FuegoFuerte (247200) | more than 13 years ago | (#436197)

Hard drives are the last moving parts in computers, other than cooling fans.

Heh... Did you forget about things like CD/DVD-ROM drives, floppy drives, zip drives, and tape drives? When these things go bad, it can also cause plenty of data loss. With the rate everything is speeding up and heating up, pretty soon almost every part of a computer will have a cooling fan, a peltier, or both. And if that goes out? (yes, peltiers do go out...) Well, then you're screwed. You probably still end up losing data. Also, as people have already mentioned, this vapourware does have moving parts, just not "conventional" moving parts.

$50? (1)

gr8fulnded (254977) | more than 13 years ago | (#436204)

yeah, right. The "drive" itself will only be $50. Who wants to bet the cable to connect it will be around $15,000?


Re:Dont leave home without it.... (1)

Ashleigh (260287) | more than 13 years ago | (#436209)

How much.....
Whoops! Okay, I missed a part. No need for a ton of replies pointing out my stupidity. I admit it quite willingly.

The Road goes ever on and on,
Down from the door where it began.

sounds great (1)

cdalemx (260874) | more than 13 years ago | (#436210)

when can I hook it up to my DV cam >> store 1000 hours of DV >> don't even want to think about how many mp3's, 3 million? or so >> ? yeah... I think it's vaporware > as there are big companies spending big bucks researching this stuff all the time >> a 100-fold leap seems a bit extreme >>

Re:ummm (1)

Arkaein (264614) | more than 13 years ago | (#436211)

Why? The article doesn't go into detail about how complex the codec algorithm is. Most really complicated compression algorithms are lossy, because they squeeze out a lot of extra storage by tolerating a small amount of error. This is lossless compression, so the algorithm is probably much more straightforward than something like JPEG or MPEG.

Even if it is extremely complicated, I'm sure an inexpensive processor could be packaged with the storage device that could handle all compression/decompression without wasting valuable CPU cycles.

10 tb of storage for $50? (1)

HelpfulPete (308947) | more than 13 years ago | (#436221)

I believe this is real, and you unbelievers will be sorry when the aliens come and suck your brains right through your tin-foil hats...yeah, that's right, the hats don't work anymore!...because I will have downloaded my brain into these modules and hidden it on Atlantis..............

If only I were a byte (1)

PureInsanity (315351) | more than 13 years ago | (#436231)

/me goes into dream sequence.
Byte: Wow, this new storage device is nice and roomy. I don't think it will ever fill up like the last one did.
Byte2: Don't forget what happened last time you said that.
Byte3: Whoa, he's installing something.
Byte: Oh no, it's all cramped in here again. What happened? Pr0n? mp3's? What's taking up all the space?
Byte2: He just installed the new version of Windows and Office 2005!
Byte: Not again!
Byte3: Oh no, it's crashing!!
Byte: Ahhhh

more on creator... (1)

pra9ma (315554) | more than 13 years ago | (#436232)

Professor Ted Williams, Emeritus Professor of Electronic Engineering at Keele University, Staffordshire, England has developed a patented solid state memory system with a capacity of 86 gigabytes per square centimetre of surface area. The system uses a magneto-optical system not dissimilar to that of CD-ROM, except that the system is fixed, solid state, and has a different operating approach.

The system has applications for computer and processor memory for credit cards and smart cards, and for high security bank notes, among many other uses.

In computer memory format, the system has a capacity per sq cm in excess of 86 gigabytes of re-writeable RAM data - this equates to a memory capacity of 3,400 gigabytes (3.4 TB) within the surface area of a credit card! Data access time is around 100 Mb/sec. A single unit with this capacity, but using the computer's processor, has a physical size of about 3 cm x 3 cm x 1.5 cm (high). An additional advantage over existing data storage systems is that only 20% of gross capacity needs to be allocated for error correction, which is significantly less than the 40% for hard disks and 30% for optical storage.

More [keele.ac.uk]
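The two quoted figures can be cross-checked against each other. Assuming a standard ~8.6 cm x 5.4 cm card face (my assumption; the quote gives no dimensions), 3,400 GB at 86 GB/cm^2 implies only about 40 cm^2 of usable area:

```python
# Cross-check of the quoted Keele figures
usable_cm2 = 3400 / 86   # area implied by 3,400 GB total at 86 GB/cm^2
card_cm2 = 8.6 * 5.4     # full card face, assuming standard dimensions
print(round(usable_cm2, 1), round(card_cm2, 1))
# ~39.5 vs ~46.4: the quoted total leaves part of the card face unused,
# so the two figures are at least internally consistent.
```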

Re:The compression algorithm... (1)

jimmajamma (315624) | more than 13 years ago | (#436236)

I don't mean to slam you, but it bothers me when people post with seemingly limited or academic knowledge of a subject.

I'm no compression expert but I regularly get 10:1 compression on text files using guess what? WinZip.

Maybe as an academic matter, your colleague couldn't achieve better than 4.x:1, but maybe he didn't know everything there is to know about text compression?

Storage space per square cm (2)

Sludge (1234) | more than 13 years ago | (#436241)

Storage space per square cm always makes me think of the same thing: What happens when there is a nanoscratch on the surface of my 18-terabytes-in-twelve-centimeters storage medium?

Tighter storage media also needs to safeguard the data on it better. Heaven help us all when we back up all our word processor documents to a tenth of a millimeter and a fly sneezes on it.

Re:This may never happen . . . (2)

KodaK (5477) | more than 13 years ago | (#436242)

Ok, ok, I read "two" as "ten". I guess hooked on phonics didn't work for me.

Reminds me of the TCAP.... (2)

Booker (6173) | more than 13 years ago | (#436243)

Ok, maybe not quite so vaporous, but the first thing that came to mind was the TCAP [byamerican.com]:

American Computer Company readying a new kind of semiconducting device which

rivals the Transistor --the Transcapacitor: a 12-Teraherz Clock Speed Microprocessor
& Storage "Building Block" Component which could Revolutionize Consumer Electronics
and all forms of Computing and communications, by making low cost CPUs and Disk Drives run
as much as 10,000 times faster, consume minute quantities of power and occupy 50 times less space.

All that, and they packaged it in a Pentium II case! :)


Fucking Cookie Spam (2)

FFFish (7567) | more than 13 years ago | (#436244)

Did anyone else notice about a dozen freaking user-tracking cookies were installed by the news website? Several cookies for every damned advertisement, plus more.

Fortunately, I use Opera. It alerts me and lets me block 'em. :-)


Re:New storage ratings... (2)

mrsam (12205) | more than 13 years ago | (#436246)

10.8TB = 1064 DVD's (presuming 10.4GB per DVD)

MPAA must be pissed off.

= 17,400 CD's (presuming 650MB per CD)

So is RIAA.

...I like this technology already.


Copy Protection, copyright, etc.... (2)

weston (16146) | more than 13 years ago | (#436249)

Just a thought here, folks....

I think it might be important that we get copy protection/copyright issues resolved before these new storage technologies arrive.

As more proprietary stuff is produced -- and if it has killer-app serious storage capabilities -- several things will happen:

1) people will realize that they can store all the movies they want to watch and trade on their peer-to-peer networks
2) The media bullies of America will realize this too, and rather than develop a new business model and adapt, will demand draconian restrictions.
3) It will be easier to slip the "protection" mechanisms into the emerging proprietary technologies....

Bottom line: we need to make sure the issue is resolved sooner rather than later.


Mmmm.. Hype! (2)

KFury (19522) | more than 13 years ago | (#436250)

From the article: "Possible applications for the memory include hand-held computers and mobile phones, which require large amounts of memory in a compact form."

Funny, I don't think of PDAs and cellphones as requiring large amounts of memory. My PDA has 2 megs, not 10 terabytes. My phone has about 32K, not 32 trillion K. Yet both seem to do their jobs pretty well...

Besides, cellphones, by definition, have wireless connectivity. What do they need gigs and terrs of storage for?

Kevin Fox

No /conventional/ moving parts, 1 cm /square/... (2)

TheDullBlade (28998) | more than 13 years ago | (#436256)

Sounds like a litho-fab scanning tunneling microscope. Lots of people have been talking about them. It's about time someone talked about production.


British (51765) | more than 13 years ago | (#436258)

Hasn't everyone been fooled so much by goatse.cx that they just reroute it in /etc/hosts?

It's a global reference (2)

Gorimek (61128) | more than 13 years ago | (#436261)

What else is all over the planet, and in the same size everywhere?

Re:The compression algorithm... (2)

stu72 (96650) | more than 13 years ago | (#436270)

You're compressing html - html is much more structured and redundant than english.

Yeah, right. (2)

Animats (122034) | more than 13 years ago | (#436271)

Well, first, 8:1 compression of English text isn't that big a deal, especially if the original is 8-bit bytes. Dictionary-based algorithms like LZW (i.e. "zip") often do that well on text.

Using a liquid between the read/write head and the recording surface would help the optical coupling between the surface and head, but creates a whole new set of problems. Probably puts a ceiling on media speed, for example. A whole set of mechanical problems has to be overcome to turn that into a commercial technology. Whether it's worth the trouble remains to be seen. For a 4X improvement in MO drive densities, probably not.

(There's a neat variation on this idea used for scanning photographic film, called a "wet-gate transfer". The film is immersed in a liquid with the same index of refraction as the film base. This makes minor scratches disappear.)

Just More Funding Hype (2)

Gorobei (127755) | more than 13 years ago | (#436275)

$50/10 terabytes = $5/TB = 0.5 cents/GB. Current tech gives us 0.5 cents/MB, doubling every year or so. Commercialization in 2002 means a claim of an eight-year technological leapfrog.

The specific claims are:

  • Claim 1: better compression (to 1/8 size on text). Impressive, but not impossible. This doesn't favor any specific hardware: any tech can use it. So now we are left with 6 years worth of hardware advances.
  • Claim 2: quad-density read/writes on mostly conventional media. Huh? No details given. Two-year leapfrog from magic (coatings/software unchanged). 4 years left to account for.
  • Claim 3: 30-fold increase due to new coatings and materials. A five-year advance.
  • Claim 4: 10 TB on a credit-card-sized device. This is an implementation, not an invention. No credit.

Three advances give us -1 years of technological leapfrogging: so the manufacturing process in 2002 should be about twice as expensive as current disk drive fab. All the major storage firms are demonstrating lab models with ultra-high bit/cm numbers. Now a minor university team has made major simultaneous advances in compression, r/w density, coatings/materials, packaging, and, above all, commercialization.

Excuse me while I snort beer through my nose.
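The parent's leapfrog arithmetic can be reproduced directly from its own 2001 figures:

```python
import math

# Reproducing the parent's price arithmetic with its own 2001 figures
claimed_per_gb = 50 / 10_800            # $50 for 10.8 TB -> ~$0.005/GB
current_per_gb = 0.005 * 1000           # 0.5 cents/MB -> $5/GB
gap = current_per_gb / claimed_per_gb   # ~1080x cheaper than current tech
doublings = math.log2(gap)              # ~10 yearly halvings of price/GB
print(round(gap), round(doublings, 1))
```

At one price halving per year, that's roughly a decade of progress claimed in two years, in the same ballpark as the parent's eight-year figure.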

Oh! Let me get out my wallet to invest! (2)

Chagrin (128939) | more than 13 years ago | (#436276)

Score: -1 Redundant
  • The first invention is a method of compressing text stored in binary form, which expresses information as a series of noughts and ones, by comparing each word with its predecessor and recording only the differences between words. This compresses the data to an eighth of its normal size.

    The second invention involves a different way of recording and reading information, increasing four-fold the amount of data that can be held on magneto-optical disks, which are used for storing computerised data. The third invention provides new kinds of coatings and materials that can be used in disks, providing a 30-fold increase in capacity.

    The fourth and most interesting invention produces a memory system that enables up to 10.8 terabytes of data to be stored in an area the size of a credit card, with no conventionally moving parts.

This is like a total technological troll.

Re:The compression algorithm... (2)

istartedi (132515) | more than 13 years ago | (#436277)

You're right, gzip isn't that good:

C:\WINDOWS\Desktop>bzip2 -k page.htm

C:\WINDOWS\Desktop>dir page*

Volume in drive C has no label
Volume Serial Number is
Directory of C:\WINDOWS\Desktop

PAGE_F~1 02-13-01 12:17a page_files
PAGE HTM 59,243 02-13-01 12:17a page.htm
PAGEHT~1 BZ2 8,098 02-13-01 12:20a page.htm.bz2
2 file(s) 67,341 bytes
1 dir(s) 3,892.71 MB free


That's a 7.32 ratio.

Re:500:1 compression can easily be achieved on tex (2)

istartedi (132515) | more than 13 years ago | (#436278)

Riot! I love it, but it's not half as good as my multivariate transaxial parser generator. It can recompile the kernel in 0.2 seconds on my 386.

How? (2)

ASMprogrammer (154812) | more than 13 years ago | (#436280)

8-fold compression by only storing the difference between words... could someone tell me how this is possible? Now, I know some amazing compression things have been done (.the .product [theproduct.de]) but this is just text.. I don't understand.

Re:The compression algorithm... (2)

Reality Master 101 (179095) | more than 13 years ago | (#436283)

I remember in a recent Information Theory course I did at Uni, we learnt that the information content of an ensemble with 26 different equally possible outcomes is 4.7 bits per symbol.

That would be a very crude way to compress. LZW compression (and similar algorithms such as the one in gzip) find multiple-byte patterns, which are reduced to smaller and smaller bit representations as they occur more frequently. For example, if I had "ABCABCABCABCABCABCABCABC", it would figure out that "ABC" is being repeated and use a smaller number of bits to represent it.

That's why English text can typically be reduced by 8-10:1 compression, because there is so much redundancy in words. Try doing a gzip on a log-style file with lots of redundancy and you'll often see 100:1 compressions.
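The effect described above is easy to demonstrate. Here is a quick sketch using Python's standard zlib module (DEFLATE, the same algorithm behind gzip and a close cousin of LZW); the input is mine, chosen to mimic a log file full of near-identical lines:

```python
import zlib

# Highly repetitive input: one 3-byte pattern repeated over and over,
# like a log file full of near-identical lines.
repetitive = b"ABC" * 10000              # 30,000 bytes
packed = zlib.compress(repetitive, 9)    # level 9 = best compression

print(len(repetitive), "->", len(packed),
      f"({len(repetitive) / len(packed):.0f}:1)")
```

On input like this, DEFLATE replaces each repetition with a short back-reference, so ratios well beyond 100:1 are routine; ordinary English prose, with far less exact repetition, lands much closer to the 2-4:1 range.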


Uh oh, Professor Emeritus (2)

Flat5 (207129) | more than 13 years ago | (#436289)

Every university has a few wack job Emeriti running around spewing garbage about something or other. Emeritus means "ok, you can still hang around, but stop bothering us."


floating in lubricants (2)

fatmantis (218867) | more than 13 years ago | (#436291)

" . . . 10.8 terabytes of data to be stored in an area the size of a credit card, with no conventionally moving parts... ...Each square centimetre of this memory system is a closed unit containing a metal oxide material on which data are recorded, and a reader made of a fibre optic tip suspended above the material in a lubricant."

notice the language: no conventionally moving parts... plenty of unconventional movement, though. ;|

Which brings me to my point: how can this invention be aimed at the mobile/palm markets if the read head is floating in lubricants?! here's to hoping they license some skip/shock technology from the walkman crowd...

Some questions I have about this (2)

FlashfireUVA (315550) | more than 13 years ago | (#436292)

The first invention is a method of compressing text stored in binary form ... by comparing each word with its predecessor and recording only the differences between words. This compresses the data to an eighth of its normal size.

Really? Just working on the above quote, I do not see much in the way of compression, especially 1/8th in size. It might work for a dictionary, but actual useful text is going to be less similar.
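The article gives no details, but one plausible reading of "recording only the differences between words" is simple front coding: store how many leading characters each word shares with its predecessor, plus the leftover suffix. A minimal Python sketch (the function name and data are mine, purely illustrative) shows why that pays off on a sorted dictionary but does almost nothing for running prose:

```python
# Hypothetical front-coding scheme: represent each word as
# (shared-prefix length with previous word, remaining suffix).
def front_code(words):
    prev = ""
    out = []
    for w in words:
        # count leading characters shared with the previous word
        k = 0
        while k < min(len(prev), len(w)) and prev[k] == w[k]:
            k += 1
        out.append((k, w[k:]))
        prev = w
    return out

sorted_dict = ["compress", "compressed", "compression", "compressor"]
prose = "the cat sat on the mat".split()

print(front_code(sorted_dict))  # long shared prefixes -> big savings
print(front_code(prose))        # adjacent words share almost nothing
```

On the sorted list each entry collapses to a couple of characters, but in the sentence consecutive words share no prefix, so the "compressed" form is actually slightly larger than the input.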

Another question I have is this actually REWRITABLE? I mean, I am reading this and they talk about recording and reading. However, is this write-once/read-many technology (in which case, it would be useful for technical reference)? OR is it write-many/read-many, in which I can upgrade my hard drive to 250x its current size for $50? I suspect it is the former, in which case, it is a nice idea but not as useful at first glance.

Even if it is only write-once, the ability to have 10 terabytes for storage in say a cell phone (even if I cannot reuse the data space) is still impressive.

what would one do with all that space? There isn't enough porn or music to actually download ... oh wait nevermind.

Re:This may never happen . . . (3)

KodaK (5477) | more than 13 years ago | (#436293)

Personally, I would love to have one! Just think of all the Pron and l33t warez I could store :)

Oh, get real. Both you and I know that by the time this technology (if it's real) makes it to market a standard OS install (take your pick, it won't matter) will be 5TB, using up half of it right off the bat. I, for one, will not be looking forward to buying Linux Kernel Internals -- 33rd printing, volumes 1-53.

And, in ten years, I'll STILL be on a fucking 56k-when-hell-freezes-over-more-like-26.4 dialup while Suzy N'Syncempeethrees and Sammy Likestoforwardjokes III have blistering Ultra-DSL at 30Gbps. Grrr.

Sorry for the rant.

Re:The compression algorithm... (3)

Goonie (8651) | more than 13 years ago | (#436295)

I'm not at all surprised that they can get 8:1 compression of plain text.
I am. One of my postgrad colleagues (back when I was a postgrad) did research into text compression. The best that he could get on the KJV Bible was a little over 1.8 bits per character (about 4.4:1 compression), and IIRC the best *anyone* has ever done with a general-purpose compression scheme is a bit over 1.7, and it turns out that the Bible is generally a little more compressible than most other text ;) Generally, you'll struggle to get better than 4:1 on most text, and that's using compressors that are substantially slower than gzip or even bzip2.

While it is correct that studies with humans have indicated that English text has about one bit of entropy per byte, suggesting a natural limit of about 8:1 compression, humans have the use of a whole lot of semantic information (they understand the meaning of the text and can therefore predict words based on that) that no compression algorithm I'm aware of has used.

I'm taking this with a large grain of salt, thanks.

Re:Nonsense (3)

lizrd (69275) | more than 13 years ago | (#436297)

I think that "no conventionally moving parts" means that they are using a Wankel rotary engine to move the parts rather than the conventional 4-stroke design. I must admit that it is a pretty clever hack to figure out how to use mechanical storage to get that kind of density and even more curious that they chose the rather oddball Wankel design over battery power which is usually used for small devices.

The compression algorithm... (3)

ca1v1n (135902) | more than 13 years ago | (#436298)

I'm not at all surprised that they can get 8:1 compression of plain text. It is a rule of thumb for encryption that plain text, at least in English and with ASCII, has only about one bit of entropy per byte. While it is impressive that they've managed to get rid of almost all of the slack, it doesn't strike me as that hard to believe.

Re:hmmm (3)

nomadic (141991) | more than 13 years ago | (#436299)

If you don't want to read about vaporware, then you should probably read buy.com instead of slashdot.org.

More Info (3)

PineHall (206441) | more than 13 years ago | (#436300)

Here is some more info I got from Google's cache for http://www.cmruk.com/cmrKHD.html


UPDATE - November 2000

During 1999 Keele High Density Ltd. (KHD) announced that it had developed a very high density memory system capable of holding 2.3TB of memory in the space of a credit card. Further work since then has resulted in some significant upward changes to both the capacities previously stated and to the applications the KHD technology addresses. Some of this work is continuing, and there are further patent applications to be filed. The information available publicly is necessarily restricted until those patents have been filed.

The very high data densities are achieved through a combination of many different factors - some relating to the physical properties of the recording media, and some to the way of processing and handling data. The physical memory system is a hybrid combination of magneto-optics and silicon. The KHD memory system is applicable to both rotating and fixed media, and is not dependent on the laser-based media-addressing system used.

Following the work undertaken since last year, the following data capacities are achievable:

a) For rotating media, at DVD size, a single-sided capacity of 245 GB using a red laser.

b) For fixed media, a single-sided capacity of 45 GB/cm, giving a total capacity of 3.6 TB on the surface area of a credit card, double-sided and using a red laser. Using a violet laser (now being introduced), the capacity at credit card size will be 10.8 TB.

In last year's announcement from KHD the primary focus was on the fixed media application, which, with a novel form of laser addressing, could be described as 'near solid state' - involving no moving parts in the conventional sense. However, this aspect of the technology will require some further R&D work to bring it to a mass-production scale - although it is believed that this will not present insurmountable difficulties. These constraints do not apply to existing rotating media applications (for example, DVD), using conventional laser systems, and there are no reasons why the KHD technology cannot be implemented within a short timescale - measured literally in months.

A major development arising out of KHD's work over recent months is that the technology achieving these very high data density figures has application not just for memory systems, but will also produce significant enhancements for the transmission and processing of data generally. This means that KHD's technology can achieve an effective increase in bandwidth capacity, because the very high data density properties, which are in addition to those from conventional compression methods, allow so much more data to be transmitted over a given bandwidth. The same advantages are also felt in terms of processing speeds. Work on this aspect of KHD's technology is continuing, but the current calculations show that an effective eight-times increase in bandwidth capacity and processor speed can be achieved.

KHD's development represents a fundamental advance in computing technology, with the benefits being felt across many industry areas. Following completion of the patenting position, KHD will be looking to license the technology to companies for mass-production, and for the ongoing R&D work needed to make the 'solid-state' memory commercially viable. The technology has been developed by Professor Ted Williams at Keele University, Staffordshire, England, over a period of thirteen years.

PROFILE: Ted Williams is Professor Emeritus of Optoelectronics at Keele University, Staffs, England, and visiting Professor of Electronic Engineering at South Bank University, London. Professor Williams was Director of Research with Sir Godfrey Hounsfield, Nobel Prizewinner, working on the invention and creation of the first NMR Scanner at Hammersmith Hospital, London. He has also held directorships with major international companies. His main focus over the last thirteen years has been the research and development of 3-dimensional magneto-optical recording systems.

KHD's licensing and funding arrangements are managed by Mike Downey, Managing Director of Cavendish Management Resources. CMR is a venture capital and executive management company, based in London. CMR has supported the development of this technology.

Further information from: Mike Downey, Managing Director, CMR, 31 Harley Street, London W1N 1DA. Tel: +44-(0)20-7636-1744 Fax: +44-(0)20-7636-5639 Email: cmr@cmruk.com [mailto] Web: www.cmruk.com [cmruk.com]

a bit light on the details (4)

freq (15128) | more than 13 years ago | (#436302)

This article is pure crap. Professor soggybottoms invents ten fabulous new technologies that will instantaneously revolutionize the entire computer industry, all while fixing himself a ham sandwich...

film at eleven...

New storage ratings... (4)

mduell (72367) | more than 13 years ago | (#436303)

10.8TB = 1,064 DVDs (presuming 10.4GB per DVD) = 17,400 CDs (presuming 650MB per CD) = 7,864,320 floppies (presuming 1.44MB per floppy) = 371,085,174,374 of those new MOT 256-bit MRAM chips.

Anyone want to come up with some other ratings ?

Mark Duell

Nonsense (4)

ideut (240078) | more than 13 years ago | (#436305)

This is a highly unconvincing attempt at hyping what is in all likelihood a non-existent product.
The first invention is a method of compressing text stored in binary form, which expresses information as a series of noughts and ones, by comparing each word with its predecessor and recording only the differences between words

Well that's pretty unremarkable. They've written a compression algorithm.

Oh, by the way, they have also invented

"a memory system that enables up to 10.8 terabytes of data to be stored in an area the size of a credit card, with no conventionally moving parts"

If that were true, why are they bothering to even *think* about their text compression algorithm? Fifty dollars a go? Who wants compression? If these people are telling the truth, we are talking about a thousand-fold increase in gigabytes per dollar over the space of two years.

The phrase "no conventionally moving parts" also brings to mind images of really whacky, non-linear moving parts flailing about. What the hell do they mean?

Absolutely no technical detail is given in the article, and as far as I'm concerned, this is yet another false alarm on the long road to entirely solid-state computer systems.

Always the size of a credit card (5)

HomerJ (11142) | more than 13 years ago | (#436306)

Why is it every piece of new tech is the size of a credit card? Can't it be the size of a dollar bill? Or what about a piece of sliced bread, considering all this new tech is the greatest thing since.

I just want to know what every tech inventor's obsession is with everything being the size of a credit card. It's not like we are going to fit these in our wallets. "Sure Mr. Tanaka, I have my 20 terabyte database here in my wallet, care to swap?"

I dunno, I just wish technology came in different sizes I guess.

Wow this is GREAT! (5)

joshv (13017) | more than 13 years ago | (#436307)

Man, I am so glad that I read slashdot. Without slashdot I would have to sift through tons and tons of bullshit every day just to find the new and amazing technological advances of the age. But no, I read slashdot, so I can come here and find the best of the best, such as this dandy invention.

Wow 10.8 TB on a credit card, wahooo! What will they think of next? How do I send them guys my money? I couldn't find any address or nothing, but those english 'blokes' sure look like they is gunna go far with this invention - specially that text compression thingy - pretty damned original if I do say so myself. And then that storage mechanism 'no conventional moving parts' - I can't imagine how they got those conventional parts to stop movin, sound like quite a trick.

Anyway, don't you slashdot guys let the criticism get you down. I am with you. Don't listen to them nattering nabobs of negativism. They always persecute the dreamers!

I am looking forward to your next 'Light speed limit possibly violated' post with anticipation.


Re:Always the size of a credit card (5)

Coward, Anonymous (55185) | more than 13 years ago | (#436308)

what about a piece of sliced bread

The size of bread slices varies widely from region to region, this prevents multinational corporations from referring to their products as the size of a piece of sliced bread. Although ANSI created a sliced bread standard in 1986 and updated their standard in 1992 to account for the coarseness of pumpernickel, this is an American standard which prevents any companies wishing to sell their product outside of the United States from using it and unfortunately the ISO has been dragging their heels on forming a sliced bread standard, so until the day when we get the ISO sliced bread standard you can expect many more credit card sized comparisons.

Re:The compression algorithm... (5)

stu72 (96650) | more than 13 years ago | (#436309)

All right,

lynx http://slashdot.org/article.pl?sid=01/02/13/024025 4&mode=nested&threshold=-1 > slash.txt

(no -source option because this is Slashdot, and as we all know too well, the content is much more redundant than repeating html tags, much, much more redundant)

shelf:~$ ls -l slash.*
-rw-r--r-- 1 stu users 20394 Feb 12 21:09 slash.bz2
-rw-r--r-- 1 stu users 23750 Feb 12 21:09 slash.gz
-rw-r--r-- 1 stu users 93867 Feb 12 21:09 slash.txt

This gives a ratio of 0.22. Surprisingly, if you grab the same page at threshold +2 and feed it to bzip2, the ratio increases to 0.27, implying that there is more entropy, and thus more information, in higher scoring posts, which of course we know to be false :)

Perhaps with this firm mathematical footing, /. can proceed to a new chapter in moderation - moderation by bzip2. Articles which receive high compression ratios are marked down automatically. Of course, this would make it possible to earn a lot of karma, simply by posting random garbage. oh wait..
