
CERN Testing Cloud For Crunching the Universe's Secrets

samzenpus posted about a year ago | from the silver-lining dept.

Cloud

Nerval's Lobster writes "The European Organization for Nuclear Research (known as CERN) requires truly epic hardware and software in order to analyze some of the most epic questions about the nature of the universe. While much of that computing power stems from a network of data centers, CERN is considering a more aggressive move to the cloud for its data-crunching needs. To that end, CERN has partnered with Rackspace on a hybrid cloud built atop OpenStack, an open-source Infrastructure-as-a-Service (IaaS) platform originally developed by Rackspace as part of a joint effort with NASA. Tim Bell, leader of CERN's OIS Group within its IT department, suggested in an interview with Slashdot that CERN and Rackspace will initially focus on simulations—which he characterized as 'putting into place the theory and then working out what the collision will have to look like.' CERN's private cloud will run 15,000 hypervisors and 150,000 virtual machines by 2015—any public cloud will likely need to handle similarly massive loads with a minimum of latency. 'I would expect that there would be investigations into data analysis in the cloud in the future but there is no timeframe for it at the moment,' Bell wrote in a follow-up email. 'The experiences running between the two CERN data centers in Geneva and Budapest will already give us early indications of the challenges of the more data intensive work.' CERN's physicists write their own research and analytics software, using a combination of C++ and Python running atop Linux. 'Complex physics frameworks and the fundamental nature of the research makes it difficult to use off-the-shelf [software] packages,' Bell added."


67 comments


CPU vs GPU (1)

boulat (216724) | about a year ago | (#44159871)

Use of CPUs from cloud-based providers is not as efficient for computations as using multiple GPUs linked together on a custom built setup. Using hypervisors instead of bare metal for computational work further reduces efficiency by another 10-15%. This is a waste of money and poorly done systems analysis.

Re:CPU vs GPU (1)

Anonymous Coward | about a year ago | (#44159953)

GPUs linked together in a custom-built setup is small-time thinking and wastes a lot of valuable researcher time. Having computing resources managed in the "cloud" really does make sense, and it is going to happen, because it will be easier to maintain and write code for in the long run. It seems like there is a vocal anti-cloud minority here on /.; get ready, though, it's the future, even if you like building your own mash-up of bargain bin hardware. ;)

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44160659)

get ready, though, it's the future

Whose future? Not mine. There might be situations where it makes sense, but it doesn't make much sense to store important personal data in the cloud (but that is not this, of course).

Re: CPU vs GPU (1)

crdotson (224356) | about a year ago | (#44161651)

I assume you don't use the cell phone or PSTN networks for anything personal, either?

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44161777)

Yeah, all this recent talk of micro computers is hogwash. Who would want to run anything on a micro computer when you can run it on the mainframe from the comfort of your dumb terminal?

Re:CPU vs GPU (1)

BitZtream (692029) | about a year ago | (#44166201)

No, it doesn't. 'Cloud' crap is for things that require variable processing power, so you can OCCASIONALLY spike to high loads without having to build a massive infrastructure yourself for that 2 hours the spike happens once a year.

CERN crunches massive amounts of data ALL THE TIME. There are no peaks and valleys; there is no benefit to letting someone else charge you extra to run your software in a reserved hypervisor instance. It's the exact opposite of efficiency.

You do not use virtual machines for real servers that require real processing. You use them for that shitty little project that doesn't require a full server, but which for varying reasons you don't want to have consume a full server. A perfect example is a web server for a small company or, better still, some department's SharePoint server within your own organization.

Get a clue: You don't understand virtualized computing in the first place.
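
The trade-off described above can be made concrete with a back-of-the-envelope sketch. All the prices below are invented placeholders (not real CERN, Rackspace, or any provider's figures); the only point is that pay-per-use capacity wins when average utilization is low and loses when the machines are busy around the clock:

    # Hypothetical break-even sketch: owned hardware vs. reserved cloud capacity.
    # Every number here is a made-up placeholder, not real pricing.
    OWNED_COST_PER_CORE_YEAR = 150.0   # hardware + power + admin, amortized (placeholder)
    CLOUD_COST_PER_CORE_HOUR = 0.05    # reserved-instance rate (placeholder)
    HOURS_PER_YEAR = 24 * 365

    def cloud_is_cheaper(utilization):
        """Cloud only wins if you can stop paying for the idle hours."""
        cloud_cost = CLOUD_COST_PER_CORE_HOUR * HOURS_PER_YEAR * utilization
        return cloud_cost < OWNED_COST_PER_CORE_YEAR

    for u in (0.05, 0.25, 0.50, 1.00):
        print(f"utilization {u:.0%}: cloud cheaper? {cloud_is_cheaper(u)}")

With these placeholder numbers the cloud stops making sense somewhere around one-third utilization, which is the point being argued: a facility that crunches flat out all year never sees the valleys the pricing model is built around.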

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44172197)

CERN crunches massive amounts of data ALL THE TIME. There are no peaks and valleys; there is no benefit to letting someone else charge you extra to run your software in a reserved hypervisor instance. It's the exact opposite of efficiency.

No, quite a few of their machines will sit idle for some time, although exactly how much depends on which project and set of machines you are talking about. Some machines for number crunching will be in use most of the time with queues, except when the portion of computing power available is not suitable for any projects in the queue. Machines for querying and doing data cuts and data distribution can spend a lot of time up and a lot of time down, as they need to be available for requests, but there are not always requests coming for them once the data has been transferred elsewhere. Those are more on par with webservers, except that they are serving very large chunks of data.

Re:CPU vs GPU (1)

Anonymous Coward | about a year ago | (#44160019)

Since you are clearly more qualified to make development, porting, and maintenance labor vs. hardware cost trade-off decisions than the people involved at CERN, why don't you go help them out a bit?

While you are at it, feel free to train some of the world's top physicists to stop writing "their own research and analytics software, using a combination of C++ and Python", have them learn to code for GPUs, and have them port all their existing code. Clearly that's the best use of their time. This is research: there's a ton of code, a lot of which is not used for extended periods of time (perhaps just run once) and is very technical. Development and learning of tools is a huge cost here. I'd assume they took all the relevant factors into account (including tons of lobbying money from various providers and vendors) and came up with a good solution.

It's true that they didn't go with the most efficient hardware solution. Sure, bare hardware would be better. In some cases GPUs would beat that. In some cases custom silicon would beat that. The goal isn't to optimize for performance here, it's to optimize for cost, which includes development, learning tools, and migrating to whatever they use after this. It's complicated, and I assume they went to a lot of work to figure it all out.

Re:CPU vs GPU (0)

citizenr (871508) | about a year ago | (#44160107)

Aww, that's cute, you expected them to be competent. They write heavy computational problems in Python.

>using a combination of C++ and Python

Re:CPU vs GPU (4, Informative)

Anonymous Coward | about a year ago | (#44160271)

Aww, that's cute, you expected them to be competent. They write heavy computational problems in Python.

>using a combination of C++ and Python

Python is used only for configuration, interfacing (as a glue), and job steering. We are not that incompetent you know ;).

Re:CPU vs GPU (1)

K. S. Kyosuke (729550) | about a year ago | (#44163655)

Python is used only for configuration, interfacing (as a glue), and job steering. We are not that incompetent you know ;).

Unless you're using PyPy, that's all that Python is used for anywhere, obviously.

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44161035)

Pretty much all of the particle physics code I've ever seen and worked with has been written in either C or Fortran for the computation. In the last couple of years there has been a big move to doing configuration in things like Python, and for years before that, a lot of visualization was done in Python too. The computation part is still done in C or Fortran, either via completely separate programs or by writing custom functions in C for Python, which is pretty straightforward once set up.

And so far, that change has been great from my perspective, as it takes less time to write and maintain the UI. The exception might be some of the older Fortran code that was easy to maintain but a huge pain to actually feed input into, resulting in every generation of grad students learning the code by writing their own scripts just to configure it. I got into this line of work because of the physics, not to write UI boilerplate.
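
For readers who haven't seen this split in practice, its shape is roughly the sketch below: the hot loop lives in a compiled C (or Fortran) library and Python only loads and steers it. This is a generic illustration, not code from any actual experiment; the library name libcrunch.so and the function sum_of_squares are invented, and the C side is assumed to have been compiled separately from something like double sum_of_squares(const double *x, long n).

    import ctypes
    import numpy as np

    # Hypothetical shared library holding the inner loop written in C;
    # the name and function are invented for illustration.
    lib = ctypes.CDLL("./libcrunch.so")
    lib.sum_of_squares.restype = ctypes.c_double
    lib.sum_of_squares.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_long]

    data = np.random.rand(1_000_000)
    ptr = data.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    total = lib.sum_of_squares(ptr, len(data))   # the compiled code does the crunching
    print(total)

The same division of labor holds whether the binding is hand-written C-API code, ctypes, SWIG, or Cython: Python handles configuration and steering while the compiled code stays where the cycles are spent.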

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44163329)

For some reason, Python is rather popular with scientists.

Re:CPU vs GPU (1)

BitZtream (692029) | about a year ago | (#44166229)

So? That's called intelligent design. You have expensive developers write the processor-heavy code in a low-level language, which takes longer, then have someone else, who costs less, do the more 'visible' work faster in a high-level language.

I suppose you think the major AAA game engines are written by incompetent developers too, right?

Re:CPU vs GPU (1)

citizenr (871508) | about a year ago | (#44166741)

I suppose you think the major AAA game engines are written by incompetent developers too, right?

Major AAA titles nowadays tend to be released on licensed engines written by competent people. But we do get a lot of hilariously badly written games, like World of Tanks (Python = single-threaded; an engine originally intended for Korean point-and-click MMORPGs) and EVE Online (Python even server-side = single-threaded bottlenecks everywhere; their most recent "innovation" slows down time to handle lag).

It's sad when places like Facebook have the best approach to solving computational problems (I especially like their disaggregated rack project) .. just so they can serve advertisements quicker.

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44160273)

I'll bet you have a kick-ass gaming rig too.

My, aren't you special (2, Funny)

Anonymous Coward | about a year ago | (#44160423)

My, aren't you special.

Telling the organization with a data center containing 65,000 cores and 30 petabytes of data (and which, incidentally, invented the Web) how to set up their computers.

Re:My, aren't you special (1)

BitZtream (692029) | about a year ago | (#44166307)

Just because someone working for your organization 50 years ago did something great doesn't mean anything anyone is doing there now is impressive.

Not saying that CERN isn't doing impressive things, but you seem to not understand that organizations are not universally made up of the same people you read a news story about 20 years ago.

Speaking as a systems architect: you'll be hard pressed to convince me that moving 65k cores and 30 petabytes of data to the cloud is intelligent. You will still need the same number of people to manage it, except now each core is going to cost twice as much, you're suddenly getting charged a bunch of fees that never existed before, and you're getting less performance.

If you pay for any reserved instance in the cloud, it costs far more than managing the same processing power yourself.

Re:CPU vs GPU (1)

the gnat (153162) | about a year ago | (#44160711)

Use of CPUs from cloud-based providers is not as efficient for computations as using multiple GPUs linked together on a custom built setup.

This assumes that GPUs are actually suitable for the task at hand. I work in a very different branch of the computational sciences, but I can testify that GPUs are near-useless for most of what we do. If a "systems analyst" gave us advice like yours, I'd be furious.

Re:CPU vs GPU (1)

K. S. Kyosuke (729550) | about a year ago | (#44163679)

I work in a very different branch of the computational sciences, but I can testify that GPUs are near-useless for most of what we do.

What exactly is the problem in your application area?

Re:CPU vs GPU (1)

BitZtream (692029) | about a year ago | (#44166333)

Branching? You do realize GPUs absolutely suck ass at any sort of branch right? So ... say ... anything except raw number crunching, sucks on a GPU.

Go ahead and write a search algorithm that runs solely on GPUs ... then watch it get out performed by an Arduino.

Re:CPU vs GPU (1)

K. S. Kyosuke (729550) | about a year ago | (#44166773)

Branching? You do realize GPUs absolutely suck ass at any sort of branch right?

CPUs today also suck ass at any sort of branching. If branching is what you want, go for Forth chips. You can branch randomly every few clock cycles and not notice it.

Go ahead and write a search algorithm that runs solely on GPUs

How is *that* a problem, unless the instruction set is completely botched? You'd have much more trouble with the memory subsystem than with the processor's inability to branch, since your ordinary GPU memory shines at coherent access but sucks at latency.

Re:CPU vs GPU (1)

the gnat (153162) | about a year ago | (#44168091)

What exactly is the problem in your application area?

The main problem is that there's no single bottleneck where parallelization really helps. We do a lot of FFTs, but those only account for maybe 25% of total runtime - and they're mixed in with a lot of other calculations (and yes, branch points), mostly called by the LBFGS minimizer. The memory transfer overhead makes it especially difficult. We could probably figure out a way to make it work, at enormous cost (for us) in terms of manpower, but there are many other algorithmic improvements that we could make which would be at least as effective and would still run on CPUs. It's not at all like molecular dynamics where you have a bunch of approximately O(N^2) loops that take up most of the time.
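
The 25% figure puts a hard ceiling on what offloading the FFTs could buy. By Amdahl's law, with p the fraction of runtime that can be accelerated and s the speedup achieved on that fraction:

    S(s) = 1 / ((1 - p) + p/s)
    p = 0.25, s -> infinity:  S_max = 1 / 0.75 ≈ 1.33

So even infinitely fast FFTs cap the end-to-end gain at roughly 1.33x, before paying any host-to-GPU transfer cost, which is why a handful of CPU-side algorithmic improvements can easily be the better investment.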

Re:CPU vs GPU (1)

K. S. Kyosuke (729550) | about a year ago | (#44168613)

I thought it would be something like this. What do you make of AMD's new unified architecture that should allow you to share memory between GPU and CPU simply by passing a pointer?

Re:CPU vs GPU (0)

Anonymous Coward | about a year ago | (#44172221)

LBFGS minimizer

I thought GPGPU versions of that had been around for a couple years, including an open source one...

Re:CPU vs GPU (1)

turbidostato (878842) | about a year ago | (#44160955)

"Use of CPUs from cloud-based providers is not as efficient for computations as using multiple GPUs linked together on a custom built setup."

Per dollar spent? On a "pay as you go" basis?

"This is a waste of money, and poorly done systems analysis"

Of course, yes. Because your silver bullet is the real silver bullet, of course.

Re:CPU vs GPU (2)

Charliemopps (1157495) | about a year ago | (#44161887)

But you're assuming CERN's going to be using 100% of capacity at all times, which they're not, and their needs are going to change a lot as well. They probably have to have dedicated staff that just builds and maintains this shit all day long. If they can pay a SaaS provider to handle it all, yeah, it's less efficient, but it might be cheaper for them because the provider could use the same equipment to do work for cancer researchers when CERN isn't using it. If they can get a way to price it based on calculations or cycles, then CERN could even put out a project with a fixed price and wait for the price to come down to what they're willing to pay. If they have something they need crunched ASAP, they could jack up what they're willing to pay and get it queued up sooner. Basically, different research projects could bid against each other and make a kind of supercomputer marketplace.

Yo dawg I heard you like really complex machines (-1)

Anonymous Coward | about a year ago | (#44159881)

So I built a giant computer to predict the results of yo giant magnet.
So you can use massive amounts of energy while you use massive amounts of energy.

Just a thought along the side-line (3, Informative)

vikingpower (768921) | about a year ago | (#44159915)

"using a combination of C++ and Python running atop Linux"... I just started to use Julia, a rather new programming language for technical computing [julialang.org] , and I am truly, truly impressed. I got interested by the benchmarks these guys published, and may be reporting back here in a couple of days with first experiences from implementing a Lucas-Lehmer test for Mersenne primes. Is Julia something for CERN ? I mean, you don't get to swim in the pool full of bugs that C++ can quickly become...

Re:Just a thought along the side-line (0)

Anonymous Coward | about a year ago | (#44160609)

Julia is not a solution to bugs. And it's not as efficient as C++. It's something nice to use on top of mature scientific libraries - written in C, C++ - if you don't mind learning a new language.

Re:Just a thought along the side-line (1)

K. S. Kyosuke (729550) | about a year ago | (#44163743)

And it's not as efficient as C++.

Except that C++ is not actually all that efficient, unless you do a lot of tweaky stuff by hand in it. There are a lot of things you can do with dynamic compilers that you can't do with precompiled libraries. Deep inlining and extensive IPO/IMO come to mind. People have hacked it onto C++, but that's like bolting extra legs onto a dog to turn it into an octopus.

Add to that the fact that Julia is homoiconic and supports much more expressive, arbitrary compile-time transformations, and you're in for a treat. Do you need an automatic differentiation pass? You can write it yourself. The C++ compiler will do no such thing for you.

Re:Just a thought along the side-line (0)

Anonymous Coward | about a year ago | (#44164791)

That is nice and all in theory, but in the real world I haven't seen much that does any better than C (and C++ if not going too far with some features) or Fortran for numerics, and that is without spending too much time hand-tuning. For a lot of computation code, there are not that many language features actually used by the inner-loop, number-crunching parts, and it comes down to which language is the fastest, and for those within ~5-10% of each other, which you have experience/preference/pre-existing libraries for.

Re:Just a thought along the side-line (1)

BitZtream (692029) | about a year ago | (#44166361)

C++ via GCC is inefficient, pretty much every other compiler I've ever worked with does well.

Stop using shitty compilers and you'll find the language not so inefficient.

Re:Just a thought along the side-line (1)

PiMuNu (865592) | about a year ago | (#44163449)

CERN has invested in about 5 million lines of C++ code (google GEANT4 and ROOT) - there is no backing out of C++ now. Python is nice because it can sit on top of the C++ backend and provide a less buggy UI. It is also becoming the de facto standard for scientific computing (not just in HEP).
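
As a deliberately trivial illustration of the "Python sitting on top of the C++ backend" workflow, here is roughly what driving ROOT from Python (PyROOT) looks like. Treat it as a sketch rather than real analysis code; the histogram name, binning, and output file are arbitrary choices.

    import ROOT  # PyROOT bindings that ship with ROOT; the C++ engine does the real work

    # Book a histogram, fill it with toy data, and fit it. The storage, binning,
    # fitting and graphics all happen inside the compiled C++ libraries.
    h = ROOT.TH1F("toy", "Toy Gaussian;x;entries", 100, -5.0, 5.0)
    for _ in range(100000):
        h.Fill(ROOT.gRandom.Gaus(0.0, 1.0))

    h.Fit("gaus")                        # built-in Gaussian fit
    canvas = ROOT.TCanvas("c", "c", 800, 600)
    h.Draw()
    canvas.SaveAs("toy.png")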

rackspace?! (4, Insightful)

Blymie (231220) | about a year ago | (#44159939)

Rackspace?!

Wait, what?!

Rackspace is the most *horribly* run hosting service of all time. I could go on for hours and hours and HOURS describing how inept and incapable they are.

From taking months to source SSDs, to providing horrible support, to utter incompetence on the part of their staff... I mean, they're HORRIBLE! Just plain horrible. If any of their automation breaks down? Well, good luck getting help FAST. I mean, if a VM move fails, well.. maybe you'll get help in 24 *hours*.

Maybe. If it's the weekend, well.. or at night... well, after all, people only use the internet during the day!

And if anything is even slightly outside of the box? Good luck with that!

No, no, no. Not to mention, expensive. I was saddled with these boneheads when a PHB decided they were a great idea! Meanwhile, they take MORE time out of your day, than just maintaining hardware servers in a data center, because if anything goes wrong?

Well, emails, calls, conferences, blah blah blah. In 1/10th of the time it would take for rackspace to fix ANYTHING, I could just tell a traditional data center to reboot my box, or install a new one.

Hell, I've had VMs@Rackspace that were HUNG, that would NOT respond to the web console reboot command. Time to get that fixed? HOURS. Christ, just GET IT FIXED.

And cost? COST! PHB made me use these boneheads. We leased two Dell R720s. For the cost of 3 MONTHS worth of the lease, I could have bought a better equipped R720! Or, hey, maybe TWO Supermicro servers!

Rackspace is a time-sucking hole in the ground. Its "expert" admins will suck your time away. Hell, I had to put off dozens of projects whilst I dealt with their constant and continual fuckups, the phone calls, the emails, the explaining to them how to fix simple things!

Heck, don't even get me started on Rackconnect, good god. Worse, buggy as hell as it is (or at least was), they had all sorts of problems with their automated iptables scripts. I snagged it, debugged it, and realised that some conehead there can't write simple bash...

Fix it...

Report the fix...

And I am still stuck with months, I repeat MONTHS, of their script being used on my boxes, with no way to replace it (it was scp'd in on boot), and therefore broken firewall rules all over the place. MONTHS, when I provided them with a fix! A ONE-LINE FIX AT THAT!

No, no, no, no, NO they are horrible, stay away, run the other way, my god stay the hell away from Rackspace, the most useless company on the planet!

If any of you, I repeat ANY of you want more detailed info, please let me know.... I hope they burn in flames as they go down into a tarpit in hell!

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44160047)

So tell us how you really feel ;-)

Re:rackspace?! (2)

Blymie (231220) | about a year ago | (#44160757)

The sad part is, I did hold back. Mostly, due to post length and the fact that I don't want to spend the next week writing it up.

Suffice it to say that I have an archive of hundreds of Rackspace emails and 60 or 70 phone calls, all stored because we were positive we'd have to sue their ass.

Yes, they were that bad, and showed that much incompetence.

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44160049)

It's due to Rackspace Private Hosting's reliance on Chef / experience around Chef.

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44160067)

which cloud service are you recommending?

Re:rackspace?! (1)

Blymie (231220) | about a year ago | (#44160741)

Not Rackspace.

There are lots of other fish out there.

Re:rackspace?! (1)

stratdesign (316189) | about a year ago | (#44161797)

Rackspace is the most *horribly* run hosting service of all time. I could go on for hours and hours and HOURS describing how inept and incapable they are.

I'll see your Rackspace and raise an Accenture.
All the competence of Rackspace for only 10x the cost!

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44166841)

I am working next to an office of Accenture employees. They do good work.

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44162831)

Fun fact: this entire post has nothing to do with Rackspace, other than that Rackspace originally developed OpenStack. As TFS notes, the whole shebang will be running on a private OpenStack.

Re:rackspace?! (0)

Anonymous Coward | about a year ago | (#44162935)

I agree 100%. I'm currently on the last leg of having to deal with these pricks because the company I'm working for still has a couple of services running there. They realized that maybe Rackspace's buzzword-spewing self is not as great as it sounds, since they got two servers with twice the cores and about 6 times the RAM for the price of what they were paying for one of those crapboxes.

EPIC (0)

Anonymous Coward | about a year ago | (#44160145)

"The European Organization for Nuclear Research (known as CERN) requires truly epic hardware and software in order to analyze some of the most epic questions about the nature of the universe.

In that case they should use Itanium [wikipedia.org] processors.

since the NSA spys on everything (1)

FudRucker (866063) | about a year ago | (#44160149)

you can bet the cloud is quickly being abandoned by almost everybody

Re:since the NSA spys on everything (1)

LifesABeach (234436) | about a year ago | (#44160175)

I'm a little curious how much the taxpayers are going to fork over in order for the NSA to be set free.

Re:since the NSA spys on everything (1)

the gnat (153162) | about a year ago | (#44160779)

you can bet the cloud is quickly being abandoned by almost everybody

Except that CERN probably isn't too worried about the NSA spying on their exciting particle detector analysis. Maybe if there was something extremely proprietary in there, they might care, but I suspect even most (American) companies won't give it a moment's thought. I hate to resort to the cliche "If you have nothing to hide, you shouldn't be afraid", but as far as scientific research is concerned this is largely true. I work for a government agency and all of our computers issue disclaimers that we basically have no privacy; I also assume that any US citizen can file an FOIA request, etc. So I act accordingly, and don't use our computers for anything I would particularly mind being publicly broadcast - and I don't particularly bother to hide what I'm working on from my competitors either. Not once has this caused me any worry. CERN has been around for ages and the people there are deeply committed to academic research, so I suspect they're not very worried either.

Of course it sucks to have to adopt this attitude towards everyday life - but that's a very different concern.

Re:since the NSA spys on everything (1)

FudRucker (866063) | about a year ago | (#44161085)

How about private companies and corporations that want to keep trade secrets out of the wrong hands? Especially since the government is fascist when it comes to the private sector, I am sure the government would use the data from its spying to help its fascist partners in the private sector while thwarting the competition.

@home? (1)

cold fjord (826450) | about a year ago | (#44160315)

I wonder if there is any opportunity for public participation?

seti@home [berkeley.edu]
folding@home [stanford.edu]
GIMPS [mersenne.org]

cern@home ????

Re:@home? (0)

Anonymous Coward | about a year ago | (#44160501)

They already have such a project, it's just rarely used:
http://lhcathome.web.cern.ch/LHCathome/

Re:@home? (0)

Anonymous Coward | about a year ago | (#44163663)

That works for problems with many calculations on small datasets. The CERN calculations are mostly few calculations on huge datasets. The @home members just don't have enough bandwidth or memory.

Cloud For Crunching the Universe's Secrets (1)

rossdee (243626) | about a year ago | (#44160361)

I'll save them some time

42

Re:Cloud For Crunching the Universe's Secrets (0)

Anonymous Coward | about a year ago | (#44163883)

but what's the question?

IaaS (1)

aXis100 (690904) | about a year ago | (#44160447)

We used to call it "rental".

Gotta love "as a service" buzzwords. They have come full circle now :)

Re:IaaS (0)

Anonymous Coward | about a year ago | (#44161901)

I read the wiki article about this and still have no idea what it means...

Re:IaaS (0)

Anonymous Coward | about a year ago | (#44162851)

IaaS is simply infrastructure you can rent, as opposed to SaaS (software as a service, think google apps) and PaaS (platform as a service, think virtualized hosting). Using IaaS means you can rent a specific amount of computational power and memory for a specific amount of time, and as with all cloud computing, the important part is that you can easily scale up. That means, if your LHC is producing a shitton of data at a collision, you can scale up your computation power to match, and then scale down again after the collision is over and you're warming up for the next one.

But feel free to ignore this post and hate "the cloud" ;).
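
To make the "scale up, then scale down" part concrete in OpenStack terms (the IaaS layer TFS says the CERN/Rackspace setup is built on), booting and deleting a batch of identical worker VMs is about all there is to it. The sketch below uses the openstacksdk Python library; the cloud profile, image, and flavor names are placeholders, so check the SDK documentation before treating any of it as gospel.

    import openstack  # openstacksdk; assumes a matching entry in clouds.yaml

    conn = openstack.connect(cloud="burst-cloud")   # placeholder profile name

    # Scale up: boot a batch of identical worker VMs while there is data to chew on.
    workers = [
        conn.create_server(
            name="crunch-%03d" % i,
            image="worker-image",     # placeholder image name
            flavor="m1.xlarge",       # placeholder flavor name
            wait=True,
        )
        for i in range(10)
    ]

    # ... hand the workers over to the batch system here ...

    # Scale down: delete them when the run is over, so the idle hours cost nothing.
    for server in workers:
        conn.delete_server(server.id)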

Virtual Universe (1)

jfdavis668 (1414919) | about a year ago | (#44160991)

Our universe is just a simulation run in the cloud in another universe.

newsflash: the Higgs fraud didn't exist (0)

Anonymous Coward | about a year ago | (#44161129)

but the propaganda machine is still milking the sheeple by churning out free publicity for CERN. This means more funding and more fleecing of the average joes.

The incontrovertible fact is that exactly *zero* people here have ever seen the Higgs. A side interpretation is that we're all joes.

The other fact I forgot to list in a previous post is that exactly *zero* people here have ever seen a nukular explosion.

Let's deal with facts and forget science and logic because the messages they pumped out are controlled.

When Sheldrake was kicked off for listing out the ever-changing "universal constants"... we know the establishment is a huge pile of control freaks that reaches the moon and back.

This is the internets.. learn, grow, break free of bs.....

CERN IT is quite big... (1)

fa2k (881632) | about a year ago | (#44162637)

The reason for using a cloud is consolidation of resources, manpower, and experience. Most companies are better off outsourcing some things because they wouldn't utilise their on-premises resources anywhere near 100% (e.g. at night, during vacations). CERN can run simulations all of the time, so there is always demand, and they can hire many experts without them "idling" most of the time. I don't think public clouds are a must for them, and I'm even skeptical of VM technology, because they are dealing with friendly code in batch jobs, which need as much performance as they can get. Unix multiprocessing and user limits should be able to handle this, perhaps coupled with chroots if required, to support different userland libraries for different experiments. They can surely benefit from the great work that's being done on open source cloud management, though.

Wasting more power to prove math equations (-1)

Anonymous Coward | about a year ago | (#44162793)

Typical stooped European scientists, bet they don't use IE --> wait till France gets fully nuclear weapons disarmed and CERN is turned into a Nuclear waste dump ;)

Mo' computers, mo' problems (1)

TheMathemagician (2515102) | about a year ago | (#44163113)

I'd say the astronomical (quantum mechanical?) amount of computing power required is more indicative of a lack of progress or of any real theoretical ideas. The rapid progress in theoretical physics in the 20th century happened via theoretical breakthroughs and experimentation, not computing.

Re:Mo' computers, mo' problems (1)

K. S. Kyosuke (729550) | about a year ago | (#44163771)

But the experimentation itself needs a lot of computation. How do you propose to interpret the raw measurements from the sensors without computers?

Re:Mo' computers, mo' problems (0)

Anonymous Coward | about a year ago | (#44163827)

Apart from the fact that the processing power is needed for data treatment, NOT simulations, we'd be happy to stop doing puny simulations and just run bigger, fancier experiments!
Do you know anyone with a few tens of billions to spare for a bigger accelerator and a dozen space probes?
If not, then a few mil on simulation will do just fine for now.

Re:Mo' computers, mo' problems (0)

Anonymous Coward | about a year ago | (#44164859)

Most of the time when talking of simulations in physics, it is just using computers to solve the PDEs and other kinds of equations developed from theory. It is a pretty basic extension of the theory work, as regardless of our preferences, a lot of physics involves math that is not straightforward and analytically solvable. For some situations, you're not going to get around that, and if you want to apply the math to a real-world, complicated situation, or to arbitrary precision, you will need a numeric solver. For many fields, computation is the glue between experiment and theory; it is what lets the two ends talk to each other, as it is what allows you to take the theory and actually make predictions with it.

This term "The Cloud" makes me... (1)

3seas (184403) | about a year ago | (#44163625)

need some sort of radar to see where the hell I am. I recall a time before the bubble burst when it was being said that tech start-ups on the internet had their heads in the sky and were not grounded in reality.... well, they still do, but now they can't even see the ground. And there are mountains around called patents.
