Slashdot: News for Nerds



Storing CERN's Search for God (Particles)

Zonk posted about 7 years ago | from the she-is-in-the-details-or-so-i'm-told dept.


Chris Lindquist writes "Think your storage headaches are big? When it goes live in 2008, CERN's ALICE experiment will use 500 optical fiber links to feed particle collision data to hundreds of PCs at a rate of 1GB/second, every second, for a month. 'During this one month, we need a huge disk buffer,' says Pierre Vande Vyvre, CERN's project leader for data acquisition. One might call that an understatement. The linked story has more details about the project and the SAN tasked with catching the flood of data."



News for Nerds! (4, Insightful)

KlomDark (6370) | about 7 years ago | (#19935573)

Wow! Actually geeky science news, not enough of that here lately!

Um... it's a product placement for Quantum (4, Informative)

xxxJonBoyxxx (565205) | about 7 years ago | (#19935629)

Actually, it's a product placement PR piece about Quantum's StorNext. (Read page 2...)

Re:Um... it's a product placement for Quantum (5, Funny)

Anonymous Coward | about 7 years ago | (#19935905)

Actually, it's a product placement PR piece about Quantum's StorNext. (Read page 2...)
We knew there were some serious nerds on Slashdot, but to be potential customers for the same RAID system as CERN, whoa! :)

Re:Um... it's a product placement for Quantum (0)

Anonymous Coward | about 7 years ago | (#19936829)

We knew there were some serious nerds on Slashdot, but to be potential customers for the same RAID system as CERN, whoa! :)

1 GB/second would be quite a lot of porn.

God Particles (2, Funny)

pipingguy (566974) | about 7 years ago | (#19936491)

According to a guy that I met yesterday on the street (he was talking to himself or somebody) the only way I could meet God (and hopefully His particles) was through his son. WTF? Can't even *God* get a good secretary these days?

Re:News for Nerds! (3, Interesting)

zeugma-amp (139862) | about 7 years ago | (#19935641)

Interesting article.

Many years ago when the SSC (Superconducting Super Collider) was still being built in Texas, I went to an HP users group meeting as I was working primarily with HP-3000 systems at the time. The fellow addressing the meeting was the head of the physics department at the SSC. It was a really neat presentation, in which he described a similar, though orders of magnitude smaller data storage requirement, though he was talking terabytes of data per month IIRC. At the time, they were planning on using two arrays of 40 workstation computers to handle the load. This would have been fairly early loosely coupled setup similar to a Beowulf cluster.

After the presentation I went up to him and told him that all I wanted to do is sell him mag-tapes.

These types of experiments evidently produce tons of data. I wonder if the processing could be parcelled out like Stanford's Folding@Home or SETI to speed up data correlations.

Re:News for Nerds! (3, Insightful)

Anonymous Coward | about 7 years ago | (#19935659)

I've often wondered if I could sneak into CERN and just look around. I think the only two things you would need to do it would be a white lab coat and a really grizzled look on your face.

I remember when I was under 18 I used to go to a lot of places I wasn't allowed in, just to check things out. I wasn't a malicious kid that would run around breaking things for fun; I just loved seeing various things that most people never see or think about, especially feats of engineering.

When I turned 18 I looked back and was actually sad I didn't do it more often. After 18 you don't just get escorted out with a warning. Now that I'm older I'm really, really sad for the upcoming generations. Genuinely good kids won't go peeking around at stuff as often, and the ones that do will be severely punished because everyone will think they were 'terrorists'.

For many of the up-and-coming kids, all they have to look forward to are pointless, unnecessary techno gadgets and the warped MTV social culture where money, drugs, and sex are all they are taught to appreciate and strive for.

Re:News for Nerds! (5, Interesting)

Rodolpho Zatanas (986694) | about 7 years ago | (#19935855)

From my experience, generic blue work clothes (preferably with your name on the breast pocket) work best. I once got into some research facility (they had lasers and everything) because I got out of the elevator on the wrong floor and some guy in a lab coat opened the door for me (I was wearing my work clothes because I was on my lunch break). I wandered about the place for something like 10 minutes before I found a way out. There was even a security guy of some type sitting in a hallway, but he lost interest in me after I looked him in the eye and said hello.

Re:News for Nerds! (4, Informative)

xyvimur (268026) | about 7 years ago | (#19935971)

Just go there and take a guided tour. If you hurry, you'll be able to go to the detector pit and see it. Otherwise, after startup it will be inaccessible to visitors for the life-cycle of the experiments (10-20 years). Google for CERN visit service.

Milosz []

CERN: been there as teenager (0)

Anonymous Coward | about 7 years ago | (#19936545)

About 15 years ago, when I was around 16, we made a one-week school trip to Geneva and we also visited CERN for one day. Even if you don't understand anything of what they are doing there, the place is impressive. I was very surprised that you can actually visit such a facility. I bet there are similar labs/institutions near you happily showing people around and showing off what they do :)

Re:News for Nerds! (2, Interesting)

Anonymous Coward | about 7 years ago | (#19936583)

I managed to see this at Easter. It's huge. I've posted some photos at: [] []. The last shows one of the rooms of computers they're using. The others are just views of the huge detector. It's in a man-made cavern 100 metres tall and 100 metres wide, all below ground! All just taken on the visitors' tour.

Re:News for Nerds! (1)

lixee (863589) | about 7 years ago | (#19936841)

I called them earlier this week to schedule a visit. They're booked until January 2008!

Re:News for Nerds! (2, Interesting)

FractalZone (950570) | about 7 years ago | (#19935831)

I'm not so sure about the "huge disk buffer". Smaller disks can be spun faster and tend to have lower latency. I'd like to see the drum drive make a comeback for disk cache...expensive, but fast!

Re:News for Nerds! (2, Funny)

uolamer (957159) | about 7 years ago | (#19936291)

/~~--allah-was-here--~~/0day/

Re:News for Nerds! (0, Redundant)

GooberToo (74388) | about 7 years ago | (#19937249)

It's not real geeky science news until they tell us how many library of congresses it is per second.

PC's? (1)

deopmix (965178) | about 7 years ago | (#19935579)

I don't think that CERN is going to be purchasing thousands of Dell PCs to analyze the data that they collect. Maybe they are talking about a distributed computing project?

Re:PC's? (1, Redundant)

SnoopJeDi (859765) | about 7 years ago | (#19935589)

From TFA:

The ALICE experiment grabs its data from 500 optical fiber links and feeds data about the collisions to 200 PCs, which start to piece the many snippets of data together into a more coherent picture. Next, the data travels to another 50 PCs that do more work putting the picture together, then record the data to disk near the experiment site, which is about 10 miles away from the data center.

Not Informative (1)

x_MeRLiN_x (935994) | about 7 years ago | (#19935793)

That refers to the number of PCs involved in storing the data.

Re:PC's? (2, Funny)

Anonymous Coward | about 7 years ago | (#19935637)

Actually their plan is to store all that data on Commodore 64 cassette tapes.

Re:PC's? (2, Funny)

1310nm (687270) | about 7 years ago | (#19935735)

Ok, who put California Games x 100000000000000000000000000000000000000000000000000 on the tapes?

Re:PC's? (5, Informative)

Rodolpho Zatanas (986694) | about 7 years ago | (#19936067)

load"*",8,1 would load something from a diskette, not a cassette.

Re:PC's? (0)

Anonymous Coward | about 7 years ago | (#19936367)

Good... but can someone please explain why I have a key labelled "RUN/STOP", and how exactly I am supposed to use it ?!??

Re:PC's? (1)

Rodolpho Zatanas (986694) | about 7 years ago | (#19936559)

If you want to load the first programme on a cassette tape, just push . Also, is sort of like on an IBM PC or compatible. Those are the most common uses. Of course, you already knew that, didn't you?

Re:PC's? (1)

Rodolpho Zatanas (986694) | about 7 years ago | (#19936565)

This is what you get for not reviewing... "SHIFT+RUN/STOP" is what's missing from the first sentence. "RUN/STOP+RESTORE" goes between "Also," and "is sort of". Sorry.

Re:PC's? (1)

pipingguy (566974) | about 7 years ago | (#19936519)

What, no Sig?

Re:PC's? (1)

gardyloo (512791) | about 7 years ago | (#19936215)

No way. Fisher Price spikey plastic records.

Re:PC's? (2, Informative)

Falstius (963333) | about 7 years ago | (#19935731)

Actually, there really is a gigantic room at CERN full of commodity PCs that form the first level of computing for the different experiments. The data is then shipped off to sites around the world for further processing. There is a combination of 'locally' distributed computing and world-wide grid being used.

Re:PC's? (1, Informative)

Anonymous Coward | about 7 years ago | (#19935909)

Initially some data is filtered at the detector pits by the farms of PCs doing the triggering. After that the data will be fed to storage and analysis. CERN has been upgrading its computer centre for quite a while (the main problem is not the power supply but the cooling system, thus some performance benchmarks also include it). Besides, CERN (Tier-0) will have high-speed connections (via the LCG backbone) with many sites around the world, and the data processing will be done in a 'global manner'.

You can google on phrase 'service challenge', or just go to the LCG [] site.

Milosz []

If Only... (4, Funny)

i_ate_god (899684) | about 7 years ago | (#19935613)

If only I could get porn that fast

there I said it, let's move on now.

Decoded Transcripts from experiment: (1)

sm4096 (1104499) | about 7 years ago | (#19936659)

1>Oh God!
1>Oh God!
2>Oh God, that is awesome. More!!!

3>Hey wait, you guys are studying the wrong kind of collisions.
1>Sorry just stress testing the hard drives.
2>Yeah we couldn't help it, the vibrations of so many drives...

The mere thought of that much bandwidth... (0, Offtopic)

Khyber (864651) | about 7 years ago | (#19935621)

I think I just creamed myself. The hardware needed to push that much data must be insane!

Re:The mere thought of that much bandwidth... (3, Interesting)

dosguru (218210) | about 7 years ago | (#19935663)

A standard dual-CPU, dual-core HP server with Windows can keep a 4Gb FC link pretty full if set up correctly. I work for a large bank, and we have many a Solaris box that can keep 4 or even 8 2Gb FC cards full into our FC and SATA disk arrays. Not to trivialize the extreme coolness of what they are doing at all, but a PB of data with a few PB of I/O in a day isn't what it used to be. I'm just glad to see they don't use Polyserve; it is worthless for clustering and has caused more downtime at work than it has ever prevented. If they really have that much data they should use 10Gb FC or InfiniBand. Even our stodgy old bank is implementing our first InfiniBand system so we can move I/O at 12Gb instead of the slow 4Gb links.

Too many video games may stunt your growth. (0, Offtopic)

Futurepower(R) (558542) | about 7 years ago | (#19935651)

Quote from the Slashdot story, as it is now: "... and the SAN tasked with catching the flood of data."

I think the correct word, considering the meaning, is "caching".

"Don't run with scissors" advice: If you play video games too much, it will stunt your growth. People need time to learn about the real world around them, not just a fantasy world. Part of learning about the real world is learning how to communicate with other people.

A correct use of the word "catch". (4, Insightful)

Futurepower(R) (558542) | about 7 years ago | (#19935929)

Not only did the Slashdot editor not catch a spelling mistake, he apparently didn't catch the fact that the linked article is an advertisement from CXO Media [] , which, according to its web site, mixes articles and advertisements: "Through our integrated media and marketing programs we provide..."

From the linked article [] : "... the team is using Quantum's StorNext software as its file system..."

Question: Did a Slashdot editor get paid directly for running an advertisement disguised as an article? Or was someone in Slashdot's parent company paid "under the table"? Or did the parent company get paid?

Anyone wanting to read a real article from 2005 about CERN's data handling, data storage, and data processing can download this PDF file: Grid Computing: The European Data Grid Project [] .

Real articles begin this way: "The computing challenges for LHC are: * the massive computational capacity required for analysis of the data and * the volume of data to be processed."

Advertisements begin by talking about God and murder, this way (from the article linked by Slashdot): "CERN's Search for God (Particles)..."

and "Maybe you last read about CERN (the European Organization for Nuclear Research) and its massive particle accelerators in Angels & Demons by Dan Brown of The Da Vinci Code fame. In that book, the lead character travels to the cavernous research institute on the border of France and Switzerland to help investigate a murder."

Re:A correct use of the word "catch". (2, Interesting)

Anonymous Coward | about 7 years ago | (#19936205)

Absolutely correct. (I didn't read the article - i work with the Grid [LCG])

Just two points which seem to be ignored:

Firstly, the data is of no use if it just sits on some tape/disk drives at CERN, because it has to be analyzed as well if you actually want to find something. Back when the whole thing started, it was deemed too expensive to build a central analysis facility at CERN, so the LHC Community Grid was created: some ~100 datacenters around the world with lots (>20k) of CPUs and lots of disk space. The data from CERN is automatically distributed over high-speed links to the main site in every "cloud" (called Tier 1, for example Karlsruhe in Germany) and then from there to the smaller centers. Then, if a physicist sends an analysis job, it finds its way to the site where the data is and works there, so there is no unnecessary copying.

Secondly, in addition to the real data coming out of the detector physicists need also quite a lot of simulated "Monte-Carlo" data. The production and storage of that has already been going on for some time, and is already taking up some millions of Gigabytes.

By the way, the data storage management system preferred by a lot of the LHC guys is called D-Cache, developed at DESY in Hamburg and free for non-commercial use (this is only for you if you have lots of disks, and preferably a tape robot as a backend).

Re:Too many video games may stunt your growth. (2, Insightful)

OverlordQ (264228) | about 7 years ago | (#19936015)

I think the correct word, considering the meaning, is "caching".

No, I believe the word was catching. As in:
They're throwing all this data at me and I gotta catch it.

"Catching a flood"? (1)

Futurepower(R) (558542) | about 7 years ago | (#19936161)

I thought about that, but when was the last time you heard someone talk about "catching a flood"?

Re:"Catching a flood"? (1)

JamesTRexx (675890) | about 7 years ago | (#19936357)

I think it was some 2000+ years ago, some guy with an animal fetish. I believe his name was Noah.

Re:"Catching a flood"? (1)

Ultra64 (318705) | about 7 years ago | (#19936507)

When was the last time you heard someone talk about "caching a flood"?

Striped FS (1, Interesting)

Anonymous Coward | about 7 years ago | (#19935661)

They're probably using an object-based parallel filesystem like Lustre or something similar. I heard that at Sun they build these all the time, with one customer striping data across 214 PCs acting as data engines, all within one Lustre filesystem. All the storage is direct-attach, but a SAN can't even come close to the speeds generated, and all the equipment being used is commodity hardware.
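As a rough illustration of the striping idea described above, here is a toy round-robin allocator (nothing like Lustre's actual protocol; the server count and stripe size are made up for the example):

```python
# Toy sketch of round-robin striping across several data servers, in the
# spirit of a parallel filesystem like Lustre. The real protocol is far
# more involved; server count and stripe size here are illustrative only.
def stripe(data: bytes, num_servers: int, stripe_size: int):
    """Split data into stripe_size chunks and deal them out round-robin."""
    servers = [[] for _ in range(num_servers)]
    for idx, offset in enumerate(range(0, len(data), stripe_size)):
        servers[idx % num_servers].append(data[offset:offset + stripe_size])
    return servers

# 1000 bytes in 100-byte stripes over 4 servers: 10 chunks dealt out 3/3/2/2.
chunks = stripe(b"x" * 1000, num_servers=4, stripe_size=100)
print([len(s) for s in chunks])
```

The point of the round-robin layout is that a sustained write is spread evenly, so aggregate bandwidth scales with the number of data servers rather than with any single disk.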

Re:Striped FS (1)

BoberFett (127537) | about 7 years ago | (#19935743)

Even so, this is a big project. If they're projecting 1GB/sec for a month, even if they use the latest massive hard drives (1TB) they'll still need around 3000 of them. Presumably they won't need them all online full time, I'd imagine they'll use some sort of hot swapping. Still, that's a lot of data.

Gigabits or Bytes? (1)

Easy2RememberNick (179395) | about 7 years ago | (#19935679)

2,629,743 seconds in a month, so... 2,629,743 GB or 328,717 GB?

  It's too late to do math.

Re:Gigabits or Bytes? (2, Informative)

snowraver1 (1052510) | about 7 years ago | (#19935717)

2.6 Petabytes. The article says that they will be collecting petabytes of data. Also, the article clearly said GB. GB = gigabyte, Gb = gigabit. The thing that I thought was: "Wow, that's a LOT of blinking lights!" Sweet!

Re:Gigabits or Bytes? (1)

Easy2RememberNick (179395) | about 7 years ago | (#19935745)

Yeah, that's why I was confused: they had a big B, but if you're talking network speed it's usually described in gigabits, small b.

"In total, the four experiments will generate petabytes of data."

  Divide at least 1 PB by four and you get 256 TB; I was close with 328 TB, so it must be gigabits.
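The arithmetic in this subthread is easy to check with a few lines of Python (the ~2,629,743-second month and the GB-vs-Gb question are the figures quoted above):

```python
# Checking the subthread's arithmetic: an average month is ~2,629,743
# seconds, and the stream is either 1 gigaBYTE or 1 gigaBIT per second.
SECONDS_PER_MONTH = 2_629_743

total_if_gigabytes = SECONDS_PER_MONTH       # GB, if the article meant GB/s
total_if_gigabits = SECONDS_PER_MONTH / 8    # GB, if it actually meant Gb/s

print(f"GB/s reading: {total_if_gigabytes:,} GB")
print(f"Gb/s reading: {total_if_gigabits:,.0f} GB")
```

Either reading lands in the hundreds of terabytes to petabytes per month, which matches the article's "petabytes of data" claim only for the GB/s case.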

30 racks, $1.8M in disks (3, Informative)

this great guy (922511) | about 7 years ago | (#19936089)

Assuming a non-RAID, 3x-replication tech solution (what Google does in its datacenters), using 500-GB disks (best $/GB ratio), they would need about 16 thousand disks:

.001 (TB/sec) * 3600*24*30 (sec/month) * 3 (copies) * 2 (disk/TB) = 15552 disks

Which would cost about $1.8M (disks alone):

15552 (disk) * 110 ($/disk) = $1710720

Packed in high-density chassis (48 disks in 4U, or 12 disks per rack unit), they could store this amount of data in about 30 racks:

15552 (disk) / 12 (disk/rack unit) / 42 (rack unit/rack) = 30.9 racks

Now for various reasons (vendor influence, inexperienced consultants, my experience of the IT world in general, etc.), I have a feeling they are going to end up with a solution that is unnecessarily complex, much more expensive, and hard to maintain and expand... Damn, I would love to be this project's leader!
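The parent's back-of-the-envelope numbers check out; as a sketch (the replication factor, disk size/price, and packing density are all the poster's assumptions, not CERN's actual configuration):

```python
# Re-running the parent's arithmetic: 1 GB/s for a 30-day month, stored
# 3x-replicated on 500-GB disks, packed 12 disks per rack unit in 42U racks.
# All figures are the poster's assumptions, not CERN's actual setup.
RATE_TB_PER_SEC = 0.001            # 1 GB/s
SECONDS_PER_MONTH = 3600 * 24 * 30
REPLICAS = 3                       # Google-style 3x replication, no RAID
DISKS_PER_TB = 2                   # 500-GB disks
PRICE_PER_DISK = 110               # $/disk
DISKS_PER_RACK_UNIT = 12           # 48 disks in 4U
UNITS_PER_RACK = 42

disks = RATE_TB_PER_SEC * SECONDS_PER_MONTH * REPLICAS * DISKS_PER_TB
cost = disks * PRICE_PER_DISK
racks = disks / DISKS_PER_RACK_UNIT / UNITS_PER_RACK

print(f"{disks:.0f} disks, ${cost:,.0f}, {racks:.1f} racks")
```

This reproduces the 15,552-disk, ~$1.7M, ~31-rack figures given above.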

Re:Gigabits or Bytes? (2, Funny)

Anonymous Coward | about 7 years ago | (#19936207)

"2,629,743 seconds in a month, so... 2,629,743 GB or 328,717 GB?"

If they were smart, they'd choose February. They could save ~172800 seconds and therefore some disk space!

Pfft... (0)

Anonymous Coward | about 7 years ago | (#19935701)

You think that's bad? I've gotta download 2 Grateful Dead torrents for, like, 3 months from Lossless Legs! I scoff at your God (particles)!

Re:Pfft... (0)

Anonymous Coward | about 7 years ago | (#19935765)

Dammit, I meant 2 torrents per day. Too late, think, caffeine, etc.

A rough calculation on disk size (2, Interesting)

mad zambian (816201) | about 7 years ago | (#19935709)

Based on 1GB/sec * ((3600 * 24) * 31), that means over 2.5 petabytes.
Something like 3000 of the current 1TB drives.
How long until exabyte-level storage is required for some project or another?

Re:A rough calculation on disk size (1)

BlueCollarCamel (884092) | about 7 years ago | (#19935893)

Thursday, maybe Friday. Depends on the weather.

Re:A rough calculation on disk size (1)

Regolith (322916) | about 7 years ago | (#19936317)

Tsk, tsk... you forgot about redundancy.

Re:A rough calculation on disk size (2, Interesting)

VisionMaster.NL (1131151) | about 7 years ago | (#19937269)

Estimates are that the four LHC experiments will produce about 15 PetaByte/year. The LHC will be online for about 15 years (maybe more). All data is kept permanently. This means that there is a fail-safe copy stored at CERN on tape, which is a big task to perform consistently. But that data is not worked on there; it is spread through the huge tubes of the academic fibers to big data centers around the world.

The online copy is replicated and stored at two geographical locations. At each location most of the data (it depends on the type) is mirrored to tape. So the largest volume is on tape, but there is still a need for mucho-grande cache servers, which are mostly huge disk arrays. The 10-11 biggest data centers will store and perform (re-)processing of the data at the rate at which it is produced. The other 190 data centers are calculating the physics analyses of all the (local) science groups.

PS: Most data is analyzed/processed multiple times.

Pseudo-Dupe? (1)

DTemp (1086779) | about 7 years ago | (#19935713)

Re:Pseudo-Dupe? (3, Funny)

Easy2RememberNick (179395) | about 7 years ago | (#19935757)

Nah it's just, spooky article submission at a distance.

  The other article appeared because it knew this one would be submitted later in the future.

Searching for God (1)

Cassini2 (956052) | about 7 years ago | (#19935751)

These physicists always say they are searching for God or "the God Particle". But what happens if they switch the big God Particle generator on, and God suddenly appears? What if we really do find God?

What are all these geeky physicists going to do then? Do we really want to find God? Do we really want physicists finding God? Is this a good thing?

Just wondering ...

Re:Searching for God (1)

Easy2RememberNick (179395) | about 7 years ago | (#19935767)

Considering the Creationist versus Science debate, that would be quite a hoot! Irony at its best: Science discovers God.

Re:Searching for God (1)

Ai Olor-Wile (997427) | about 7 years ago | (#19935789)

What percentage of you is Dan Brown, and how can we extract the other parts? o_O

Re:Searching for God (1)

Svenne (117693) | about 7 years ago | (#19935791)

Well, they could find pink invisible unicorns as well. What then?

Re:Searching for God (1)

StarfishOne (756076) | about 7 years ago | (#19936729)

OMG! Ponies!! :D

Re:Searching for God (2, Funny)

ammonynous (1119371) | about 7 years ago | (#19935811)

With a God Particle generator, wouldn't you *generate* God? Wouldn't that be a hoot?!?

Re:Searching for God (1)

Umbral Blot (737704) | about 7 years ago | (#19935887)

I'm going to assume, for the sake of charity, that you are being facetious, and know that what is called the "god particle" has nothing whatsoever to do with god as the word is usually understood (as an invisible sky wizard). The Higgs particle is only called the god particle as a joke by physicists to emphasize how awesome finding it would be. Once upon a time I was bothered by that, because I thought it was confusing for no good reason, but now I see it as a kind of intelligence test.

Re:Searching for God (2, Funny)

edsyc (1088833) | about 7 years ago | (#19935937)

The physicists don't really want to find god, it's just the only way they can get research funding under the bush administration.

Re:Searching for God (1)

Saikik (1018772) | about 7 years ago | (#19935957)

I think this whole thing is a farce. These scientists should spend time on more interesting problems; even looking for aliens seems less ridiculous than this. If God wants to be seen/found, he would be. If he doesn't want to be seen/found, he won't be. If we find something that is hiding, then by the very definition of finding it, that thing we find can't be God. That thing becomes fallible; it is something else.

Re:Searching for God (1)

damiam (409504) | about 7 years ago | (#19936025)

Dude, just STFU if you don't understand the nature of the research. No one's trying to find God here.

Re:Searching for God (1)

JMZorko (150414) | about 7 years ago | (#19936257)

* sigh *

When society finally moves past this silly "god" idea, as it has with a lot of other silly superstitions, we will all be _so_ much better off. I really don't mean this in a disrespectful way (I'm perfectly willing to be your friend, even if we disagree on the "god" thing, and I can respect people for a lot of reasons, even if we differ on matters of religion), but it's really what I think / feel. It's done waaaaay too much harm, and the sooner we jettison it, the better.


John, your friendly neighborhood happy-go-lucky heathen godless atheist :-)

Re:Searching for God (0)

Anonymous Coward | about 7 years ago | (#19936359)

This isn't about finding god, it is about finding the force that gives everything mass. This is EXTREMELY important work, it isn't religious mumbo jumbo.

Re:Searching for God (1)

jdh41 (865085) | about 7 years ago | (#19936373)

Dissection. Questioning. Years of resulting research.

Possibly ask him for more funding.

Re:Searching for God (1)

Cassini2 (956052) | about 7 years ago | (#19937117)

Thanks to all the posters for the witty responses.

Finding God (3, Funny)

Mark_MF-WN (678030) | about 7 years ago | (#19936511)

Don't worry -- the products of particle accelerators only exist for a few picoseconds. If God is created during a collision event, he will wink out of existence so fast that we'll only become aware of his presence by the shower of Mormonions and PatRobertsonite particles impinging on the detection apparatus.

FTL (3, Funny)

unchiujar (1030510) | about 7 years ago | (#19935771)

"Due for operation in May 2008, the LHC is a 27-kilometer-long device designed to accelerate subatomic particles to ridiculous speeds, smash them into each other and then record the results."
Next up ludicrous speed [] !!! Better fasten your seat belts...

Re:FTL (1)

Roger W Moore (538166) | about 7 years ago | (#19936397)

Due for operation in May 2008, the LHC is a 27-kilometre-long device designed to accelerate subatomic particles to ridiculous speeds

Actually it would be better to say "ridiculous energies" because the speed of the protons in the LHC will barely be any faster than those in the Tevatron...but the energy is seven times larger thanks to relativity.

Re:FTL (1)

Eravnrekaree (467752) | about 7 years ago | (#19937077)

Are there any practical applications of this research in technology? And what will this research tell us about the universe?

Thousands of disk drives. (3, Funny)

Anonymous Coward | about 7 years ago | (#19935779)

Hmm, let's see. ~2700 TB of data over one month. Let's store it on 500 GB drives. That's 5400 disk drives just to store the data. Add in the extra drives for parity, and a few hundred hot spares, and this thing could easily use OVER NINE THOUSAND drives.

Re:Thousands of disk drives. (5, Informative)

noggin143 (870087) | about 7 years ago | (#19936065)

We are expecting to record around 15PB / year during the LHC running. This data is stored onto magnetic tape with petabytes of disk cache to give reasonable performance. A grid of machines distributed worldwide analyses the data. More details are available on the CERN web site

Re:Thousands of disk drives. (1)

complex(179,-70) (1101799) | about 7 years ago | (#19936391)

Using, what, 2A @ 5V + 0.5A @ 12V peak while spinning up? That's 18kA @ 5V + 4.5kA @ 12V; 144kVA. That's a lot of whacking big AC/DC converters, and some serious plumbing to make sure all these drives get a stable power feed at all times.

Re:Thousands of disk drives. (2, Insightful)

UnHolier than ever (803328) | about 7 years ago | (#19936671)

How much is a 500GB drive worth nowadays? $150? So your OVER NINE THOUSAND drives are worth about, hum... $1.35M. CERN has a budget of about $5B. It's the speed at which the data is coming in that's the problem, not the total amount of data.

Re:Thousands of disk drives. (0)

Anonymous Coward | about 7 years ago | (#19937241)

I'm not sure whether to be ashamed or proud of slashdot that no other responder got the joke.

Where's the problem? (1)

femto (459605) | about 7 years ago | (#19935847)

1GB/s * 1 month = 1GB/s * 30 day/month * 24 hour/day * 3600s/hour = 2,592,000 GB.

A big disk (Seagate ST3750640AS) is 750GB.

2,592,000 GB / 750GB/disk = 3,456 disks.

At AUD467 per disk this will cost AUD1,613,952 (plus computers+net). Even cheaper if you allow for the fact these are retail
prices for wholesale quantities. Let's take the startup current of 2A@12V as the worst case power
consumption and we end up with a maximum power of 83kW. That's less than 35 domestic heaters (2.4kW ea).

Okay, it's not trivial stringing together 3,456 disks, but it's not exceptional either. It is no bigger in
scale than a typical university network. Or just buy a few of the Internet Archive's Petaboxes [] off the shelf.

Re:Where's the problem? (0)

Anonymous Coward | about 7 years ago | (#19935883)

They should use this Sun Microsystems switch [] which comes with 3,456 ports. Perfect fit.

Ah, engineers. (1)

onebuttonmouse (733011) | about 7 years ago | (#19935885)

'During this one month, we need a huge disk buffer,' says Pierre Vande Vyvre, CERN's project leader for data acquisition. One might call that an understatement.
I expect he referred to the problem of finding the God Particle as "distinctly non-trivial".

Fun problem (2, Insightful)

bob8766 (1075053) | about 7 years ago | (#19935919)

The network is one thing, but just processing that amount of data is incredible.

200 computers break the 1GB/sec stream into more manageable 5MB/sec chunks of data, but then they still need to handle the metadata that figures out how to put it all back together. On top of this they'll need some redundancy in case of data loss, and a way to redistribute the load if a machine croaks.

These are good problems, it would be a fun system to work on.
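The fan-out arithmetic above can be sketched in a few lines (the 200 and 50 PC counts are the article's figures; the one-node-down case is illustrative, not CERN's actual failover scheme):

```python
# Fanning 1 GB/s out over a first stage of 200 event-builder PCs and a
# second stage of 50, plus what the survivors absorb if one node croaks.
# Tier sizes come from the article; the failure case is illustrative.
TOTAL_MB_PER_SEC = 1024  # 1 GB/s

def per_node(total_mb_per_sec: float, nodes: int) -> float:
    """Even share of the stream each node must sustain."""
    return total_mb_per_sec / nodes

print(f"stage 1: {per_node(TOTAL_MB_PER_SEC, 200):.2f} MB/s per PC")
print(f"stage 2: {per_node(TOTAL_MB_PER_SEC, 50):.2f} MB/s per PC")
print(f"stage 1, one PC down: {per_node(TOTAL_MB_PER_SEC, 199):.2f} MB/s per PC")
```

Losing a single machine barely moves the per-node rate; the hard part, as the comment says, is the metadata and reassembly, not the raw per-PC bandwidth.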

Re:Fun problem (1)

xyvimur (268026) | about 7 years ago | (#19935951)

It's a hell of fun to work on.

Re:Fun problem (1)

pipingguy (566974) | about 7 years ago | (#19936509)

What's this 'Google' thing I keep hearing about?

Idea (1)

bguzz (728614) | about 7 years ago | (#19935983)

./ | bzip2 > results.bz2

Problem solved!

Re:Idea (3, Funny)

KillerCow (213458) | about 7 years ago | (#19936033)

./ | bzip2 > results.bz2

Problem solved!

No. No, my friend; you do not grasp the scale of this project.
./ | bzip2 | bzip2 > results.bz2

Not So Huge (5, Informative)

PenGun (794213) | about 7 years ago | (#19936083)

It's only 5x HD SDI single channel at ~200MB/s. Any major studio could handle this with ease.

SDI is how the movie guys move their digital stuff around. A higher-end digital camera will capture at 2x HD SDI for 2K res, 4:4:4 colour space. A few of 'em and you've got your 1GB/s easy. Spools onto godlike RAID arrays.

  Get 'em to call up Warner Bros if they have problems.

It's not that much, really (1)

diamondsw (685967) | about 7 years ago | (#19936175)

1GB/sec is 3.6TB/hour, or 86.4TB/day, or 2.5PB in a month. That's really not all that huge for enterprise or scientific storage. I see that all the time in hosted environments.

Just A Thought (0)

Anonymous Coward | about 7 years ago | (#19936211)

I was wondering how this ranks against what Google handles in a month. Either way, I'm sure Google's got plenty of storage to handle the needs for the experiment.

E-Mail it to Google (2, Funny)

Nom du Keyboard (633989) | about 7 years ago | (#19936285)

Just e-mail it all to Google. By then gMail should be able to handle that much per user.

CERN DAQ is generally impressive (5, Interesting)

torako (532270) | about 7 years ago | (#19936309)

It's important to distinguish between the amount of data generated during an event right in the detector and the filtered data that in the end will be kept and saved on permanent storage. The ATLAS detector, for example, has a data rate in the order of terabits per sec during an event. There's a pretty sophisticated multi-level triggering system whose purpose it is to throw out most of that data (~98%) and only look for interesting events.

Right now, the average event size for ATLAS is 1.6 MByte and the system is designed to keep around 200 events per second, or roughly 300 MByte/s. This doesn't sound like much, of course, but you have to consider that the bunch crossing rate (i.e. the rate at which bunches of protons collide and generate events) is 40 MHz.

So you have to design a system that boils this rate from 40 MHz down to 200 Hz and only keeps the interesting parts, while buffering all the data in the meantime. For this reason, the first trigger level is implemented entirely in hardware right in the detector and reduces the rate to 75 kHz with a latency of 2.5 μs. The rest of the trigger runs on clusters of Linux computers with a latency on the order of one second.
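The rate arithmetic is easy to sanity-check. A quick sketch, using the numbers from the comment above (not figures pulled from ATLAS documentation):

```python
# Rough arithmetic for the ATLAS trigger chain described above.
bunch_crossing_hz = 40e6   # 40 MHz bunch crossing rate
level1_hz = 75e3           # hardware (Level-1) trigger output
final_hz = 200             # events kept for permanent storage
event_mb = 1.6             # average event size in MByte

l1_rejection = bunch_crossing_hz / level1_hz    # ~533x, done in hardware
hlt_rejection = level1_hz / final_hz            # ~375x, done in software
total_rejection = bunch_crossing_hz / final_hz  # 200,000x overall

stored_mb_per_sec = final_hz * event_mb         # ~320 MB/s to storage
```

That 200,000x rejection factor is the whole point of the trigger: the detector can never write out what it sees, only what the trigger decides is interesting.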

Better yet... (2, Interesting)

curryhano (739574) | about 7 years ago | (#19936333)

...all this data will be distributed to a handful of TIER1 sites (CERN is TIER0) all over the world (about 10). At the TIER1 sites the data will be preprocessed. The TIER1 sites distribute their preprocessed data to TIER2 sites, which are the places where the international scientists work. I work at a TIER1 site and we face a lot of technical challenges with this project. Since the data is preprocessed at the TIER1 sites as I mentioned, we will need a compute cluster and the necessary internal bandwidth to move the data around. With each new software release (about every six months), ALL raw data has to be reprocessed with the new software, and all results have to be stored. So for every piece of raw data we will have to store preprocessed data for every software release. Of course a lot of data will be stored on tape, but we expect that the dataflow from CERN (for us 150MB/s to disk and 75MB/s to tape) will be the least of our problems. Moving the data around and preprocessing it is probably a bigger problem in the long run. And given that the machine will be running for about 15 years or so, this will be a very long run!
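A toy model of that reprocessing burden (the volumes below are my own hypothetical numbers, not the parent's actual site figures):

```python
# Toy model: if every ~6-monthly software release reprocesses ALL raw
# data and every set of derived results is kept, derived data quickly
# dwarfs the raw data over the machine's lifetime.

def derived_pb(raw_pb, releases_per_year, years, derived_fraction):
    """Total derived output if each release's reprocessing pass is kept."""
    return raw_pb * derived_fraction * releases_per_year * years

# Hypothetical: 5 PB of raw data, 2 releases/year, a 15-year run,
# each reprocessing pass producing half the raw volume:
total = derived_pb(raw_pb=5.0, releases_per_year=2, years=15,
                   derived_fraction=0.5)
# total is 75 PB of derived data kept alongside 5 PB of raw
```

Whatever the real numbers are, the shape of the problem is the same: storage grows linearly with the number of releases, not with the raw data rate.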

way too late/early (0)

ducomputergeek (595742) | about 7 years ago | (#19936343)

After coming home from a party I read this as "CERN Searches for GOLD Particles." Thought to myself, WTF?

Not really that much storage bandwidth... (2, Insightful)

RulerOf (975607) | about 7 years ago | (#19936365)

TFS makes a point about storing 1GB (presumably gigaBYTES) of data per second, but THAT feat is already in widespread use, specifically for the digital manipulation of 4K film. The company that produces the systems that process this film data is called Baselight [] .

Basically, 4K film, at a resolution of 4096x3112, requires approximately 50MB per frame at 24 fps. That comes out to about 1.2GB/s, and manipulating the data doubles it to 2.4GB/s. The systems [PDF] [] that Baselight sells run 8 nodes and 16 processors, and it's all built with commodity hardware and some flavor of Linux. Apparently they use 3ware RAID cards... I found out about this by browsing 3ware's site while shopping for a RAID controller.

Either way, my point is, it's been done, and there's a real world application that requires that type of data storage bandwidth and has nothing to do with scientific data. :P

I'd like something like that (0)

Anonymous Coward | about 7 years ago | (#19936393)

...will use 500 optical fiber links to feed particle collision data to hundreds of PCs at a rate of 1GB/second, every second, for a month.

My God, think of the porn you could download with that setup. It would be biblical (in a pornographic sense).

Choice of filesystem (1)

Tracy Reed (3563) | about 7 years ago | (#19936409)

I am really surprised they did not use the Lustre filesystem [] for their data storage, since it is vendor-neutral, open, and designed for exactly this sort of thing. The Lustre guys report being able to obtain tremendous bandwidth and scalability. I have not yet been able to play with Lustre, but I look forward to doing so.

ALICE is not Higgs Hunting (2, Informative)

Roger W Moore (538166) | about 7 years ago | (#19936423)

The ALICE experiment actually concentrates on heavy ion collisions, which is why they mainly worry about one month per year; the rest of the time the machine is running protons for the other experiments, ATLAS and CMS, which will look for the Higgs. ALICE will hopefully study the quark-gluon plasma but, as far as I know, has no plans to look for the Higgs.

The CIO article is incomplete (2, Informative)

quarkie68 (1018634) | about 7 years ago | (#19936747)

OK, we got a halfway overview of CERN's decision, with some bold statements of questionable validity. I am submitting this criticism purely on the grounds of being really interested in large data storage; I don't work for any large storage vendor, but I am an architect of storage systems.

First of all, the statement "and it's (StorNext) completely vendor independent": lots of other solutions provide flexibility about choosing the hardware vendor, from a theoretical perspective. The theory says that if vendor A makes a SAN, vendor B makes a RAID controller, C a disk cabinet and D offers a clustered FS, and all comply with the relevant standards, you can plug them together and expect them to function. However, imperfections in the standards and hidden proprietary optimizations always dictate certain configs and combinations for optimum performance. There is a lot of work to be done in StorNext and other similar products before they can claim full flexibility. My experience deploying a StorNext-based solution on a 1200-node setup says so; to keep the post short I shall exclude vendor details at this stage, but if someone is interested I am happy to go over them. There is vendor dependence if you want optimum performance. Not to mention that if you mix and match the RAID and SAN cards in the setup, any unfortunate issue might end up a multi-headache, even if you have solution support (A blames B, B accuses A, and the game of ping-pong begins). You can never exclude vendor dependence in such a large setup; you have to deal with it.

Then you have "Clustered file systems are still an evolving category, she says, but enterprise IT is warming up to it." I can imagine what the author classes as enterprise IT here, but I think there is a bit of an orientation issue. CERN is not exactly the classical enterprise IT environment, is it? Not in terms of its requirements for resilience and capacity, which FAR EXCEED enterprise IT requirements. CERN is a research setup, and the mentality of a research setup (one that incubated the WWW, after all) is, or should be, one of innovation and playing with some of the latest and greatest. In fact, some US-based research setups have long experimented with other cluster FSes. They are not warming up. CIO claims that StorNext is scalable. It is. But to what extent? Have they excluded, for example, things such as Lustre? [] If yes, why?

Hang on... (1)

skinfitz (564041) | about 7 years ago | (#19937041)

...just because a SAN is connected to a machine at 1Gbit does not mean there is 1Gbit of data passing over the link all the time.

If I were to write up my house network I could say 'network switches feed data to several computers at 1Gbit per second'. That would be true even if I only used it for web browsing; it doesn't mean I'm saturating my bandwidth.

Backup options (5, Funny)

Mostly a lurker (634878) | about 7 years ago | (#19937207)

I assume they will want more than one copy of this for backup purposes. Here is my analysis of their choices. The total data to be backed up (for the month) is taken as a lazy 1 * 60 * 60 * 24 * 30 = 2,592,000 gigabytes.
  • Printed hardcopy. Many authorities recommend this as you do not need to worry about changes in data formats over time. For an exact calculation, we would need to know the font they were planning to use and the character encoding. However, let's take a working assumption that they can cram 10KB of data onto an A4 sheet. That implies 259,200,000,000 pages. They will probably not want to use an inkjet printer for this solution and may, indeed, choose to acquire multiple printers and split the load. A single printer at 10 ppm would take approximately 50,000 years to complete the backup. On 70gm paper, it would weigh a little over a million tons. At any rate, this would certainly produce reams of output.
  • Diskettes. This was good enough for nearly everyone 15 years ago. It is curious that such a tried and trusted technique is no longer in fashion. I assume regular 3.5" 1.44MB diskettes, generally recognised as easier to handle than 5.25". We shall need around 1,800,000,000 diskettes. One drawback is that the person changing the diskettes as each one fills up might become a little bored after a while. On the positive side, the backup will be quite a lot faster than the printed solution. Assuming about one diskette per minute, inclusive of changing disks, the backup could be complete in less than 3,500 years.
  • Punch cards. Now considered somewhat old fashioned, punch cards were once a mainstay of every programmer's personal backups. Like printed hardcopy, anyone familiar with the character encoding used could read the data without needing any access to a computer. If we assume 80-column cards, we would need 32,400,000,000,000 cards. I would be somewhat concerned about the problem of getting this stack of cards back in the correct order if I dropped it. With a weight of tens of millions of tons and forming a stack perhaps three million miles high, handling certainly would be challenging and an accident very possible.
  • Paper (punched) tape was the only alternative on the first computer I used, a basic early model Elliott 803 without the optional magnetic tape. If I recall correctly, you could manage about 10 characters per inch, so you would need a paper tape over 4,000,000,000 miles long. Hmmm, that would be silly. The other solutions are clearly better.
I am sure other options will be considered, but I just wanted to bring these up in case CERN had failed to consider them.
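For anyone who wants to audit the figures, the arithmetic scripts easily (same working assumptions as above: decimal gigabytes, 10KB per A4 page, 1.44MB diskettes, 80-byte cards, one printer at 10 ppm, one diskette per minute):

```python
# Recomputing the backup estimates with the comment's own assumptions.
total_bytes = 1e9 * 60 * 60 * 24 * 30          # 2,592,000 GB for the month

pages = total_bytes / 10e3                     # A4 sheets at 10 KB each
print_years = pages / 10 / (60 * 24 * 365)     # one printer at 10 ppm

diskettes = total_bytes / 1.44e6               # 1.44 MB floppies
floppy_years = diskettes / (60 * 24 * 365)     # one diskette per minute

cards = total_bytes / 80                       # 80-column punch cards
```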