
New Weather Computer

emmett posted more than 14 years ago | from the meteorologists-and-geeks-collide dept.

Science 73

Sarah writes "It seems that the National Weather Service has a brand-new computer which will allow them to predict the weather earlier and more accurately. If I were a kid, I could now plan my snow days in advance..." Yeah, but the teachers would give you enough homework to last you through the day.

73 comments

What I could do with a weather computer (0)

shitface (121619) | more than 14 years ago | (#1359375)

If the computer controlled the weather then I would rule the world.

Re:5th post baby! (0)

Anonymous Coward | more than 14 years ago | (#1359376)

8th?

I could have used that Cray :( (1)

spiralx (97066) | more than 14 years ago | (#1359377)

The older Cray C-90 computer had been in use since 1994 and was to be offered to other government agencies when replaced, but it was destroyed in a fire last September.

What a tragic waste of a computer. Oh, the humanity!

More power != less warnings (1)

Zule_Boy (45951) | more than 14 years ago | (#1359380)

This makes me think... Now that we have a "surprise-proof weather prediction system," they can warn us about every drop of rain. The next time they say the snow word, hundreds of morons will run to the store and buy all the bread and bottled water they can. This is insanity!

Hmmmmm.... (1)

karb (66692) | more than 14 years ago | (#1359381)

The older Cray C-90 computer had been in use since 1994 and was to be offered to other government agencies when replaced ...


In related news, Commander Taco announced the newest Andover.net venture: www.slashdot.gov.

Heard this on CBS news yesterday (1)

gavinhall (33) | more than 14 years ago | (#1359382)

Posted by NJViking:

The computer sounds very impressive. They will be able to make more long-range forecasts as well as predict more factors sooner, thereby giving people more warning.

I would think this would help them with tornado predictions as well.

NJV

What drives theoretical limit? (2)

acfoo (98832) | more than 14 years ago | (#1359383)

The article mentions a 14-day theoretical limit for forecasts. What drives this limit? I know that small-scale weather forecasting is way too complex, and they are talking about county-level forecasts.

Anyone have any experience with weather modeling?

Re:Yeah, but... (0)

Anonymous Coward | more than 14 years ago | (#1359384)

Yes, but it doesn't because LINUX SUCKS

Maybe they'll get it right now... (1)

XenoWolf (6057) | more than 14 years ago | (#1359385)

I don't know about you guys, but I'm glad to see any kind of advancement toward better long-range prediction of the weather. I'm tired of the weatherpeople being wrong about the third and fourth days of the forecast. It was going to be a sunny 53 degrees F today, according to Sunday's forecast, but according to the news this morning, it's going to stay below freezing all day, and it's supposed to sleet this evening.

XenoWolf

Weather computer's predictions (1)

erpbridge (64037) | more than 14 years ago | (#1359386)

"with a warning that major East Coast cities face the threat of snow and severe cold late this week"

Ummmm.... I think it's a little late for that. The cold has already arrived, and has been here the past few days. (Is 0 degrees in Connecticut severe enough yet? With a bad wind chill, no less!)

I could've forecast that!

On the other hand, this new computer sounds pretty good. Forecasts over a week in advance? Great!

(Now, what they need to do is make a distributed client, Weather Channel@home, a la SETI@home or Distributed.net, to push it to a month in advance.)

Re:Great. (1)

tribbel (103363) | more than 14 years ago | (#1359387)

That's probably why they needed a new computer. Maybe this one will get some sort of e-mail-so-that-you-don't-miss-the-forecast thingy. Or maybe it will be able to post to c.o.l.a. Or leave a message on your answering machine. Or ....

I'm glad they got the computing resources (2)

crosseyedatnite (19044) | more than 14 years ago | (#1359388)

I'm not a meteorologist (but my sister is...). With computing resources like this, I hope that top management at NWS does not carry the belief that more computing power alone will improve forecasting. You need to have good models behind it.

My sister once told me that met. is "sloppy physics," mostly because many of the variables for their equations aren't measurable, so they often need to extrapolate or even guess them. Stuff like the speed of vertical motion of air masses, which, as I understand it, isn't measurable via radar (but I could be wrong).

Re:I could have used that Cray :( (0)

shitface (121619) | more than 14 years ago | (#1359389)

Brother, we could all use a Cray. Perhaps that fire was used so that someone could take it home ;)

New Weather Computer. (2)

dkh2 (29130) | more than 14 years ago | (#1359390)

So maybe, and I do mean *maybe*, our predicting will get a little better. The sad fact is that most of us get our weather from the local guy/gal who gets a printout from a wire service and interprets what they see on a series of 3 or 4 sequential reports. With experience and a little knowledge of historic trends and fluid dynamics, this can get fairly accurate for the short term.

The main benefit we should see from new NOAA computers is more efficient operations. We should be able to analyze storm tracks more quickly in cases of major weather events such as hurricanes or tornadoes. Long-range forecasting will still have the same level of best-guess scenarios it had last week.
"For every complex problem, there is a solution that is simple, neat, and wrong."

Distributed Weather Prediction (2)

rak3 (43270) | more than 14 years ago | (#1359391)

Now if they would adopt a distributed computing model for generating weather predictions, I think lots of people would join immediately. Imagine being able to say "I want my CPU power to help calculate the weather for my hometown." That would be cool.

Re:Yeah, but... (1)

QZS4 (7063) | more than 14 years ago | (#1359393)

I wouldn't be surprised if it did. IBM supports Linux, and it's already running on the RS/6000, so why shouldn't it run on an SP cluster?

For the moment, though, I would recommend using AIX instead, since it has been in the playground a while longer for this kind of system.

Re:Great. (1)

Munky_v2 (124274) | more than 14 years ago | (#1359394)

Maybe, but maybe not... If this system can do large enough amounts of comparative forecasting, e.g., pulling comparative data from a large database and looking at that against measurements taken from remote centers around the earth, we might be able to get a pretty accurate World Weather Forecasting System. We could even call it WWFS. I mean, if we are able to give it input as to what has happened in the past, and it could look at the world as a whole, then this could be possible. Look at the AMD K6 series processor: it is able to predict branches with an 80% accuracy rate.


Munky_v2

Downunder (0)

Anonymous Coward | more than 14 years ago | (#1359395)

Boy, could we use one of them here in Oz.

Re:What drives theoretical limit? (2)

schporto (20516) | more than 14 years ago | (#1359396)

Lack of perfect knowledge. If you could know what the weather at every point in the world was, you could accurately predict weather far into the future (the models are quite good). However, because you only have weather stations every couple of miles (at most) on the ground and the occasional weather balloon, you can't know what's happening everywhere. And in the case of weather, even small things (damn butterflies sneezing) can cause large events (oh look, a hurricane).
It's kinda like trying to predict where a bouncing ball will land knowing only a few data points, while also knowing that someone will interfere at some point (but not when). It becomes quite difficult.
-cpd

786 Processors? (2)

Cycon (11899) | more than 14 years ago | (#1359397)

It's Uccellini's center that uses the new 786 processor IBM SP computer located in Bowie, Md.

...hmmm... 786 processors in this upgrade from the Cray C-90... methinks someone over at the NWS has a sense of humor, and is punning off of Intel's x86 line...

Re:More power != less warnings (3)

Paul Neubauer (86753) | more than 14 years ago | (#1359399)

It could, if it genuinely is "surprise proof," lead to fewer warnings. I rather doubt that even this machine is really "surprise proof" with something as annoyingly complex as weather prediction. The real test, and likely downfall, will be predicting truly nasty weather - genuinely severe storms that spawn tornadoes, for example. While these may be somewhat predictable, I rather doubt we'll see forecasts of which paths to get clear of before the twister shreds them. That would be "surprise proof" prediction. Who really cares if the forecast high or low is off by a couple of degrees?

The real hope is that there will be sufficiently increased accuracy so that needless warnings are not issued. Hopefully the reverse will not happen, with overconfidence causing a failure to issue a warning that deserves to be issued.

As for runs on stores due to snow, maybe that situation happens in places where snow is an unusual event. I happen to be in a place where it is possible, in the colder months, to walk across lakes without getting wet. It is snowing right now and nothing much is unusual. Folks still go to work and all, and there's no rush to stores as this is just like a rain shower here, except when it's over the result can (or has to) be pushed aside.

FYI: Been Forecast for Some Time Now (1)

Postmaster General (136755) | more than 14 years ago | (#1359400)

This is something I remember reading about some time last year. Now, before anyone starts flaming that it's still news, yada-yada-yada, let me point something out. I am NOT saying that Slashdot should not have posted it ... I'm simply pointing out that this has been planned for ages (in other words, it's an "FYI").

One particular aspect of the story I read last year was that they announced their plans during one of those wonderful storms in Florida. Of course, at the time, it didn't do Florida any good, did it? (Rhetorical question, BTW.)

In defense of meteorology (3)

lythander (21981) | more than 14 years ago | (#1359401)

The computer is a great step forward for NOAA. I used to work for them, my wife still does -- both trained meteorologists.

The whole "Sloppy physics" argument gives short shrift to the endeavor. The physics are very complex, and still not completely understood. They are also incredibly complex -- meteorology encompasses advanced physics and chemistry, along with the ungodly math that goes along with it. Only Theoretical physics will have more computers dedicated to it on the top 100 list of supercomputers.

Still, forecasting is as much art as science -- Truly good forecasters rely on intuition and experience to interpret output from several different models (both graphic and numeric) and put together a forecast. Statistical methods are also used to compare with similar events from the past. It is very easy to forecast -- it is extremely time consuming and difficult to forecast WELL.

Many TV stations' on-air people are not meteorologists (in training or temperament) -- in fact, many of the people on the Weather Channel are communications majors (at least they have a room full of metos telling them what's going on).

The theoretical limits on forecast ability come from a number of factors, starting with the reliability and density of data points. There are relatively few data points for upper-air data (release a balloon, etc...) - on the order of a few per state - and those soundings happen only twice per day (except perhaps in extremely active severe weather environments). Even automated sensing stations are few and far between. Data then has to be interpolated for intermediate points and then stuffed into the model. Most models are then run on a 64-km grid and interpolated down. Finer-mesh models (the 32-km Eta, et al.) are being developed, but when all the models get run on the same machine, sacrifices in the name of efficiency must be made. Additionally, we still just don't know how it all works exactly. The effects of small-scale things like the "heat island" effect of large paved areas, pollution, solar activity, etc. are still being teased out.

Anyway, it's good to see them get the new machine (actually it wasn't so much the fire as the SPRINKLER SYSTEM that killed the old one). Give them a break. Their mission isn't to tell you what sort of coat to plan on for the morning, it's to save lives and property, and on that count, they do a hell of a job.

Re:I could have used that Cray :( (1)

Doctor Memory (6336) | more than 14 years ago | (#1359402)

Somebody with a big home! I looked into picking up a Cray-1 at a Los Alamos garage sale, and found out that it would take some US$800/mo worth of 440V 3-phase to feed it. Not to mention the pair of VAX-11/785s to run I/O for it...

It's Not The Size or Speed/It's Also The Granularity (2)

quakeaddict (94195) | more than 14 years ago | (#1359403)

In a previous life I was a meteorology graduate from Rutgers University (1988). While it's nice that they have a bigger and better computer, unless and until they have better input/initialization data to feed it, I can't see how the forecasts will get any better.

Twice a day (0Z and 12Z) the main prediction models are initialized with data from all over the world. Not only surface data, but "upper air" data as well. Upper-air data come from sparsely located stations that actually have the ability to send up and record data from weather balloons.

To give you an idea of how sparse these stations are, the upper-air observation sites closest to my house in New Jersey (USA) are:
Albany, NY
Pittsburgh, PA
Wallops Island, Virginia.

Every gridpoint in between, no matter how many there are, is interpolated/guessed at as initialization for the various numerical models that depend on that data.

Click here for a complete list of NWS stations that are included in the national upper air data collection network [noaa.gov].

So while they might have the ability to have more gridpoints, and they can have the capability of modeling the interactions between more gridpoints, the initialization data is still the same. It seems to me that they also need to spend more money on getting more data.

I remember the Olympics in Atlanta. IBM set up a very sophisticated weather observing system that allowed the NWS to predict weather at each individual venue. They were able to do this because they had upper-air data every 10 or so miles a *few* times (i.e., more than twice) each day.

Click here if you would like to see the current output of these models. [millenniumweather.com] This will lead to a whole set of links for the various models. Some sites are better than others (the Unisys site and the University of Wisconsin site are the best).

The current models of choice at the NWS are the ETA and AVN. The NGM is an older model they still run, and it is referenced in many of their discussions. They run it as an internal consistency check to make sure the other models didn't get caught in a chaos loop somewhere.

it's about time (0)

Anonymous Coward | more than 14 years ago | (#1359404)

See, I think this is a good thing. About a month ago I remember seeing a chart listing the most powerful computers in the world. The first 20 or so from the US belonged to the "defense" dept., while virtually every European country had its number one acting as a weather computer. The US's number one weather computer was ranked something below 50. It seems we had our priorities a little twisted on the supercomputer front... or maybe that's the way it's supposed to be, being the bullies we are.

Models aren't that hot (1)

spiralx (97066) | more than 14 years ago | (#1359405)

Actually, due to computational constraints, I doubt the models themselves are anywhere near perfect. The physics of fluid dynamics involves vast reams of very messy non-linear differential equations IIRC, and these require complex numerical methods to solve even approximately. Even if they had the data at every single point (impossible, I know), the current models would still fail in the long term / at small scales.

Linux - nope (2)

gelfling (6534) | more than 14 years ago | (#1359406)

Yah - Linux runs on a few PPC RS/6000's, but not on an SP, since the real complexity is getting the interprocessor switch backplane to work. Remember that an SP is not a single-system-image machine. It is a group of processor complexes, and for every processor complex you have another instance of AIX. Each SP rack is made up of processor complexes, each one of which is analogous to an individual SMP RS/6000, and the switch backplane is what holds the whole shebang together. Each complex is called a node and can hold 2-12 (or possibly even more) PPC CPUs. A rack holds a bunch of nodes, so if you have 12 CPUs in each node, a 768-processor machine is made up of 64 nodes mounted in 8 racks or so, depending on the packaging.

Re:What drives theoretical limit? (1)

paRcat (50146) | more than 14 years ago | (#1359407)

To put it simply, chaos.

You can never know the exact way certain air currents will flow. They mention being able to predict the movement of air systems the size of counties, and the way those interact with each other. Someone else mentioned butterflies... The weird thing is, air movements that small could actually feed each other and merge into a large weather system.

AFAIK, chaos says that two identical events won't necessarily be the same. So we can't predict that two air molecules will react to each other the same way every time. Maybe it's lack of information, or maybe it has to do with variations at a lower level. Whatever the case, you run into butterflies in Japan that create tornados in Oklahoma.

What about the models? (1)

Pyramid (57001) | more than 14 years ago | (#1359408)

I agree that this is one gnarly piece of big iron, but how much time and money is being devoted to the refinement of the computer models used to forecast the weather? A weather forecaster friend mentioned in conversation that most of the models are pretty good at predicting summertime trends, but in the winter (the NGM and ETA especially), most fail miserably.

Here in the Ohio Valley, weather is unpredictable enough, but winter weather is especially bizarre. It would be nice if the local forecasters had higher quality data and had to rely on their "gut feeling" less. It really sucks to wake up to a 1/2 inch of sleet frozen to the road when the forecaster assured the city we would only see light flurries!


If nothing else, imagine the babes you could get with that kind of computing power at your disposal. "Hey baby, wanna check out my gigaflop?"

Obligatory open source comment (2)

dsplat (73054) | more than 14 years ago | (#1359409)

Wouldn't it be cool if they would open source their forecasting code? It isn't like anyone is threatening to take over the job of actually doing the forecasts. Most of us don't have the computing horsepower. And who else has the up-to-date data sources? But I think some of us might take a look under the hood to see how it all works. And if they're lucky, they might get a couple of good patches that would gain them a little more speed or fix a bug or two.

Re:Obligatory open source comment (1)

lythander (21981) | more than 14 years ago | (#1359410)

I don't think the code is available (it falls under the category of "who would really care," I think), but the models are all published. If I can find a link I'll post it.

Re:What drives theoretical limit? (0)

Anonymous Coward | more than 14 years ago | (#1359411)

Look for a huge increase in data points soon as more aircraft are equipped with automated weather reporting equipment.

Re:Heard this on CBS news yesterday (1)

paRcat (50146) | more than 14 years ago | (#1359412)

Actually tornados are a bit different. Living in Oklahoma, I've seen my share. My brother's a storm chaser (tornados), and I know from him that it's still not terribly exact.

Doppler radar allows them to identify specific possible trouble spots, but that's about it. They can't know where it's going to go until they know where it is on the ground. That's a bit too late.

It looks like they're getting closer, but I think tornado systems are just too small to accurately predict.


Butterfly effect and the 2-week limit (2)

bolsh (135555) | more than 14 years ago | (#1359413)

Someone was asking why such a powerful machine can only manage to predict the weather up to two weeks in advance. The answer comes from chaos theory, and particularly the work of Lorenz in the '60s. The crux of the matter is that the weather is one of the many real-life systems where minuscule changes in initial conditions result in huge differences over a relatively short period of time. He nicknamed this the "Butterfly effect [jhu.edu]" (the name comes from the theory that a change in the initial conditions as small as a butterfly flapping its wings in Europe could result in a typhoon developing over India, or something like that).

Think of a simple system, like a round-bottomed bowl turned curved side up. Put a marble on the top and record its path. Then get the marble and put it close to the original starting point. It could, if you're sufficiently close to the centre of the bowl, end up going in any direction. That's the butterfly effect.

Since measuring instruments can't measure every contributing factor to the weather (temperature, pressure, moisture, wind) to arbitrary precision at a sufficient number of points to form an accurate and complete initial condition from which to predict the weather, the forecast will stay close for a while (the better the measurements, the closer), but within a couple of weeks the values just diverge.

If people are interested in reading a bit more about this stuff, there are a few good introductory books, like "Chaos" by James Gleick or "Does God play dice?" by an author I can't remember. A good article as a lead-in is here [columbia.edu].

Dave Neary.
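
To make that sensitivity concrete, here is a minimal Python sketch (an illustration only, not anything resembling the NWS code) that integrates Lorenz's classic three-variable system twice, from starting points differing by one part per million, and prints how quickly the two runs drift apart:

    # Lorenz system with the classic 1963 parameter values.
    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = state
        # Simple forward-Euler step; crude, but fine for illustration.
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.000001, 1.0, 1.0)   # off by one part per million in x

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * 0.01:5.1f}  separation = {dist:.6f}")

By around t = 25 the two trajectories bear no resemblance to each other. More decimal places in the starting point only postpone the divergence, which is why better measurements buy days, not months, of forecast.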

Re:What drives theoretical limit? (0)

Anonymous Coward | more than 14 years ago | (#1359414)

You are all missing the real point. The meteorological equations are non-linear and cannot be linearized without losing all the interesting properties of the original model.

Non-linearity implies that multiplications are done on variables in the equations. This implies, therefore, that errors also multiply and accumulate throughout the forecast. Not only that, but errors start out large in the initial fields.

After a sufficient length of forecast there is little resemblance between the forecast and true fields. Some improvement can be made by increasing the resolution, but doubling the resolution in one dimension increases the computing power needed by 2 to the power of 3 or 4. A cynic could say that as far as weather forecasting goes, there is no such thing as a large, fast computer!
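
The multiplication argument is easy to see in a toy nonlinear map (an illustrative sketch only; real forecast models are vastly more complicated):

    # In the chaotic logistic map x -> 4x(1-x), a tiny initial
    # error grows multiplicatively with every iteration.
    x, y = 0.4, 0.4 + 1e-10   # "truth" vs. a slightly wrong analysis

    for step in range(1, 36):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 5 == 0:
            print(f"step {step:2d}  error = {abs(x - y):.3e}")

The error roughly doubles each iteration, so ten extra digits of initial accuracy buy only about 33 extra steps - the cynic's point in quantitative form.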

Re:Linux - nope (2)

spell (18829) | more than 14 years ago | (#1359415)

Well, I assume that they are using the new Power3 High Nodes, which is presently an 8-way SMP box. Okay, we all know (well, those people who deal with SP and RS) that this will go up fairly shortly. Can an SP run Linux? Well, yes... I've seen it done. Absolutely no reason why it shouldn't. Unfortunately, there are no drivers, at present, for the switch, which is the clever bit. I guess someone might write some; I'd be surprised if someone didn't.

BTW, has anyone written Linux drivers for the SSA adapters, either Intel or RS?

Re:What drives theoretical limit? (1)

m2 (5408) | more than 14 years ago | (#1359416)

AFAIK, chaos says that two identical events won't necessarily be the same.

Careful there. Chaotic systems are deterministic. The problem is that they are extremely sensitive to initial conditions. In this case, depending on the model, initial conditions might involve knowing the temperature, pressure, velocity, density, etc. of very small chunks of air. It's not like you want to predict weather at room-level scales, but in order to accurately predict weather at global scales over long periods, you might need data at room-level scales.

As I said, butterflies...

Computer Science Technique Crossover (1)

crosseyedatnite (19044) | more than 14 years ago | (#1359417)

Often, when pondering for no reason, I wonder how many "state of the art" programming techniques that have existed in CS (fuzzy logic, neural networks, hello world in every language) have been utilized by the meteorological sciences people?

I'm not knocking their abilities for development of software to solve their problems, but I always come to a single issue: they are primarily meteorologists who have learned to program, as opposed to programmers, and their primary focus is meteorology. What if they had an influx of people whose background is entirely programming? People who program because they focus on programming.

I believe that the two disciplines working together could better attack the problems at hand. As an industry we are happiest when we have a nice fat dataset to analyze that we can sink our algorithmic teeth into...

Sprinkler system killed old supercomputer (1)

Bill the Cat (19523) | more than 14 years ago | (#1359419)

>> (actually it wasn't so much the fire as the SPRINKLER SYSTEM that killed the old one).

A $$gazillion invested in a Cray supercomputer, and they didn't spring for a Halon system to protect the room? Ouch!

Re:It's Not The Size or Speed/It's Also The Granular (1)

CharlieG (34950) | more than 14 years ago | (#1359420)

You missed Upton, NY - read Stony Brook - where the local NWS center is, on the grounds of the old Camp Upton, now known as Brookhaven National Labs.

Re:It's Not The Size or Speed/It's Also The Granular (1)

quakeaddict (94195) | more than 14 years ago | (#1359421)

I thought there was one on Long Island! I was looking for others in NY but didn't realize Upton was the one.

Re:What drives theoretical limit? (2)

Bryan Andersen (16514) | more than 14 years ago | (#1359422)

This is still a drop in the bucket. It will help with the higher-altitude data collection and pilot forecasts. To really increase the amount of data collected, what is needed is the Handy Home Weather Station that automatically reports its position (GPS coordinates) and current weather data directly to the National Weather Service every few minutes - provided the units were cheap enough (less than $200) and could easily hook up to Ethernet or call a local or 800 number to report in. An ideal unit would be fully self-contained and mounted on a mast above the house, have a remote display, and be linkable into the home network.

Using CDE, no less. :-) (3)

devphil (51341) | more than 14 years ago | (#1359423)

There are TVs scattered through the hallways where I work, switching back and forth between CNN and an internal USAF news network. On the CNN report I just watched that covers this story, there's a brief snapshot of one of the NWS scientists hacking away at a workstation running CDE.

That's why there's a two-week limit to the forecasting times. After that, CDE has exhausted the swap space.

Re:Sprinkler system killed old supercomputer (2)

lythander (21981) | more than 14 years ago | (#1359424)

In fact, for a long time, it had none. Last year it was shut down / reduced to low availability for quite some time when the crew installing sprinklers in the room found asbestos and it had to be quarantined while they cleaned up.

segmentation fault (0)

Anonymous Coward | more than 14 years ago | (#1359425)

did anyone see ABC news covering this story? they showed the monitor: right smack in the middle, "Segmentation Fault".

Re:Obligatory open source comment -models are avai (0)

Anonymous Coward | more than 14 years ago | (#1359426)

There are several models that you can download and build yourself. All of them should have sample input files in the tar files so you can run them as well. I don't have any benchmarks handy to tell you what type of performance you'll see on your machines. I do know of one benchmark for a cloud model that was about 84 megaflops on a single-processor 450 MHz Pentium II w/ 512 MB RAM and 512k cache (the same benchmark on a J90 was 86 megaflops):

MM5: http://www.mmm.ucar.edu/mm5/mm5-home.html [ucar.edu]
This is the primary research model used in the met. community and is generally used for short-range prediction (out to ~48 hours). Fairly easy to work with, though getting all of your data set up can be a bit of a hassle.

ARPS: http://www.caps.ou.edu:80/ARPS/ [ou.edu]
The ARPS model is being worked on by the Center for Analysis and Prediction of Storms (CAPS) at the Univ. of Oklahoma. The goal of CAPS is to provide short term predictions of hazardous weather. Everything but the kitchen sink in the code. Not the fastest code out there for sure.

WRF: http://wrf.fsl.noaa.gov/ [noaa.gov]
The Weather Research and Forecasting (WRF) model is the next generation community model that is currently being developed. This model will be used both for research as well as operational forecasting. This is the successor to MM5. The NWS will begin to run this model operationally at some point once development gets far enough along.

The NWS also makes the source code available for the Eta model as well (try rooting around at the National Centers for Environmental Prediction web site [noaa.gov]). This version of the code will more than likely be the old parallelized version, not the new version that's been changed for the distributed nature of the SP2.

-mike

Aren't there limits on predictability of wx (2)

ch-chuck (9622) | more than 14 years ago | (#1359427)

I mean, 'chaos theory' largely came out of wx prediction, specifically the 'butterfly effect,' where a very small change in the initial conditions can vary the outcome wildly. What I'm saying is, isn't wx naturally pretty 'random and chaotic' (like life!), and aren't there some kind of natural theoretical limitations on just how much CAN be predicted, no matter how much CPU horsepower you have - like predicting the toss of a die - kinda like an Uncertainty Principle of Meteorology?

The Scarlet Pimpernel

That was redundant (2)

ch-chuck (9622) | more than 14 years ago | (#1359428)

just noticed someone already said the same damn thing :)
(read before posting? Naaaa!)

The Scarlet Pimpernel

Re:Computer Science Technique Crossover (1)

lythander (21981) | more than 14 years ago | (#1359429)

Nowadays many/most people in Meto are actually CS people. You can't do meto if you can't do CS.

The upper air stuff is usually more important... (1)

rlk (1089) | more than 14 years ago | (#1359430)

in part because the upper air network is so sparse (and so infrequently sampled) compared to the surface network. There are also all sorts of boundary layer and topographical effects that cause the surface to be not representative of the entire atmospheric column. This, of course, cuts both ways -- local microclimates can be very important.

The classic example is a deep, bowl-shaped valley -- on clear, calm nights it will typically be sharply colder than on surrounding hills. On nights with a light wind, if the air in the valley decouples from the light breeze aloft, the difference might be very sharp indeed.

However, all in all I think it would be more cost effective to upgrade the upper air network, particularly over the oceans and Asia.

To see what meteorologists really think... (1)

rlk (1089) | more than 14 years ago | (#1359431)

check out iwin.nws.noaa.gov. I use the text interface; look under State Data, and then under most of the states you'll find Forecast Discussion. Depending upon who's on duty and how interesting the weather is, you'll get anything from "Will continue current forecast" to a long discussion of factors influencing the weather and local effects. Walt Drag of the Boston (actually Taunton), MA office is legendary for his discussions, which recently have sometimes filled two full screens. He's clearly a big-time snow buff.

http://iwin.nws.noaa.gov/iwin/ma/discussion.html is the current and last several discussions; they're usually updated about every 6 hours (again, except for Walt, who likes reissuing them for changing conditions). The current one (as of noon EST on Wednesday, January 19) is really juicy. Assuming he's still on duty this afternoon (I'm not certain how the shifts run), his afternoon discussion will be even better. It's interesting to read, to see how these folks interpret the data and the forecast models.

Re:Computer Science Technique Crossover (1)

mesocyclone (80188) | more than 14 years ago | (#1359432)

The models used in meteorology have no sophisticated techniques other than pure numerical analysis. They are based on fundamental physics equations and do not have significant heuristics or neural nets.

They are finite element simulators. The atmosphere is divided into 3 dimensional sections of a certain (fairly low) resolution (32km is the best I know of for a synoptic model). Input data is way too sparse, so they are initialized (called analysis in meteorological terms) with that data, interpolations, etc. The equations of fluid dynamics, thermodynamics, radiation, etc are evaluated for each cell (and its neighbor interactions) producing a new state. This is iterated.
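
A heavily simplified sketch of that step-the-grid-forward idea (illustrative Python only; operational models evolve many coupled fields in three dimensions with real physics):

    # Toy "model": advect a temperature-like blob around a 1-D ring
    # of grid cells with a constant wind, using a first-order upwind step.
    N, dx, dt, wind = 100, 1.0, 0.5, 1.0    # CFL number = wind*dt/dx = 0.5

    field = [0.0] * N
    field[45:55] = [1.0] * 10               # a warm blob in mid-domain

    def step(f):
        c = wind * dt / dx
        # Upwind differencing: each cell looks at its upstream neighbour;
        # f[i - 1] wraps around at i = 0, making the domain periodic.
        return [f[i] - c * (f[i] - f[i - 1]) for i in range(len(f))]

    for _ in range(40):
        field = step(field)

    peak = max(range(N), key=lambda i: field[i])
    print(f"blob peak is now at cell {peak}, value {field[peak]:.2f}")

Note how the first-order scheme smears the blob out as it carries it along; numerical diffusion of this kind is one more mundane error source on top of the sparse input data.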

While chaos puts upper limits on accurate forecasts, there are far more mundane issues that cause the most problems. Lack of data is the worst - especially upper air and oceanic data. Upper air data is sampled over the US at 400km grid points twice a day. Samples over the ocean are missing. Additional upper air data is provided from instruments on commercial aircraft, which telemeter that data for flight operations. Satellite remote sensing can provide some missing data, but it is very hard to get accurate data on different layers of the atmosphere itself - especially from geosynchronous orbit, which is the only place that has continuous wide-area coverage.

You can see the satellite derived examples at url's like:

http://orbit-net.nesdis.noaa.gov/goes/soundings/gifs/caspac.gif
http://orbit-net.nesdis.noaa.gov/goes/soundings/skewt/gifs/kphx.gif
http://orbit-net.nesdis.noaa.gov/goes/winds/gifs/trwnd96.gif
http://cimss.ssec.wisc.edu/goes/realtime/latestsnpw.gif

Additional problems are caused by lack of computational power. Even if fine-grained data were available, the models require relatively large grid sizes in order to be able to compute the weather faster than it is actually happening!

Models also are poor at handling terrain, often using "cheats" in parameters rather than physics and fine-grained modelling to adapt. Terrain has significant effects on the weather in much of the west, and those effects extend to the larger-scale systems.

Also, long range models must be global in scope, and the difficulties I have described in US data are much more severe in many other parts of the world.

The new computer was touted as being able to forecast weather down to the "county" size up to 14 days. That is completely absurd, given the difficulties cited above. In fact, small scale forecasting (meso-scale) does not exist in the forecast models (although there are small scale models used for understanding as opposed to forecasting). Forecasting of certain kinds of severe weather is very difficult, and today has many unknowns about the mechanisms. For example, when the national doppler system was set up, it was believed that almost all tornados came from supercell thunderstorms, and that most supercells with particular characteristics produce tornados. As it turns out, this is wrong. Something like 50% of tornados (higher percentage of damaging tornados) are produced by classical supercells, but most supercells do not produce tornados. Thus today there is a high false warning rate on radar-generated tornado warnings (one reason that spotters are still very important), and also a significant false negative rate (missed warnings).

Recent research indicates that small scale atmospheric features at or near the surface have significant impact on tornadogenesis, and they cannot be seen (or seen well) by the radar or picked up by the normal data networks.

Finally, to illustrate the difficulty in forecasting truly severe weather, consider the deadly Oklahoma tornado outbreak of May 3, 1999. It was not apparent that the risk of tornados was high until that day, and the severity the outbreak would reach was not apparent until just a few hours before the event. Models were able to show that morning that there was a potential for tornados, but it took real-time analysis of real-time data by meteorologists in order to do the actual forecast.
Disclaimer: I am not a meteorologist, but I am a tornado chaser who pays attention to what the real experts are saying and doing.

Re:Great. (1)

emechler (140181) | more than 14 years ago | (#1359433)

Using very complicated differential equations, it's theoretically possible for humans to accurately predict the weather to within 99% across the entire world for any given day. However, at current computer speeds it takes nearly 7 days to do said calculations. Hopefully in the next 10-15 years we can get some _really_ fast computers to do this, and then we won't miss the weather forecast at all.

Re:Computer Science Technique Crossover (1)

Troy Baer (1395) | more than 14 years ago | (#1359434)

Often, when pondering for no reason, I wonder how many "state of the art" programming techniques that have existed in CS (fuzzy logic, neural networks, hello world in every language) have been utilized by the meteorological sciences people?

Probably not many. This is not necessarily a bad thing; many "state of the art" programming techniques result in lousy performance, and part of the point of the weather simulations is to get things out as quickly as possible.

I'm not knocking their abilities for development of software to solve their problems, but I always come to a single issue: they are primarily meteorologists who have learned to program, as opposed to programmers, and their primary focus is meteorology. What if they had an influx of people whose background is entirely programming? People who program because they focus on programming.

The problem with this is that people "who program because they focus on programming" usually don't have much background in the underlying physics that the weather models simulate. These weather models are essentially fluid dynamics simulations: big, nasty, coupled sets of nonlinear partial differential equations that have been approximated in some way to make them solvable. Most of the CS folks I know simply don't know enough about either the physics or the math needed to approximate the physics -- it's not something they're normally exposed to.

These models are typically written in Fortran -- not because the meteorology people are computing troglodytes, but because Fortran is still the best option for scientific computing. The issues for generating optimized Fortran code are very well understood; C and C++ are much more difficult because of all the pointer chasing. There's also a huge body of scientific libraries for Fortran that C and C++ simply don't have by virtue of not being around as long.

Now, it looks like I'm bashing CS people. I'm not, and in fact there is room for a lot of work from CS folks on front-end integration stuff. Here's what I mean: there are on the order of a half dozen model codes the NWS uses for forecasting. Each one generates Lord only knows how much data per run. Correlating all this data and presenting it in a cogent, easily understandable format (for the expert, not necessarily the layman) is something the scientific computing community in general really needs the CS folks for. Another thing CS faculty could do for the scientific computing community is teach more about code optimization methods for modern cache-based architectures (taking advantage of data locality for cache reuse, minimizing memory references, etc.). These topics usually aren't even touched upon in a CS curriculum except possibly in a graduate-level high-performance computing class, and they really should be discussed in the introductory computer architecture classes.

--Troy
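
To illustrate the data-locality point, a small timing sketch in Python/NumPy (an illustration, not one of the model codes): summing a large matrix row by row walks memory with unit stride, while summing it column by column jumps across memory, and the clock shows the cache's opinion of the difference.

    import time
    import numpy as np

    n = 4000
    a = np.random.rand(n, n)   # C (row-major) layout: rows are contiguous

    def timed(label, slice_sum):
        t0 = time.perf_counter()
        total = sum(slice_sum(i) for i in range(n))
        print(f"{label}: {time.perf_counter() - t0:.3f} s (checksum {total:.1f})")

    timed("row sums    (unit stride)", lambda i: a[i, :].sum())
    timed("column sums (stride = n) ", lambda i: a[:, i].sum())

Both loops do exactly the same arithmetic; on a typical machine the strided version runs noticeably slower.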

Re:"Does God play dice?" author (1)

Randym (25779) | more than 14 years ago | (#1359435)

"Does God play dice?: The mathematics of chaos" by Ian Stewart (1990)

Re:To see what meteorologists really think... (2)

Windigo The Feral (N (6107) | more than 14 years ago | (#1359436)

Rlk dun said:

check out iwin.nws.noaa.gov. I use the text interface; look under State Data, and then under most of the states you'll find Forecast Discussion. Depending upon who's on duty and how interesting the weather is, you'll get anything from "Will continue current forecast" to a long discussion of factors influencing the weather and local effects. Walt Drag of the Boston (actually Taunton), MA office is legendary for his discussions, which recently have sometimes filled two full screens. He's clearly a big-time snow buff.

Another meteorologist who consistently puts out educational (and damned funny!) forecast discussions is "I-Sixtyfive" down at the Birmingham, AL WFO. (His specialty is severe weather, along with a decided sense of humour -- among other things, his forecast discussions have at times imitated hit parades, and at one point actually spoofed Star Wars... it's obvious he has quite a bit of fun at his job :)

Seriously, though... reading forecast discussions is a good way to at least start to learn about meteorology... you learn what forecast models are used, how they get the data, what makes a good or bad forecast, etc. (And just to note -- a lot of forecasting ain't computer models so much as someone who's been in the area for years and knows how the weather patterns work. I've seen this many a time with the SDF (Louisville) NWSFO -- Ohio Valley weather is living proof that weather is chaotic ;) and many times what makes or breaks a forecast is being familiar with how weather tends to "behave" (or, rather, misbehave) in the area.)

Re:What drives theoretical limit? (4)

Windigo The Feral (N (6107) | more than 14 years ago | (#1359437)

Acfoo dun said:

The article mentions a 14-day theoretical limit for forecasts. What drives this limit? I know that small-scale weather forcasting is way too complex, and they are talking about county-level forecasts.

Short Answer: because weather is chaotic.

Long Answer That Probably Tells You More Than You Wanted To Know :) :

Weather systems were, oddly enough, the first systems proven to have sensitive dependence on initial conditions (the defining characteristic of a chaotic system). A person by the name of Lorenz discovered this in early attempts to model weather systems in the late 50's/early 60's, and more and more complicated weather simulation systems have proven it even more. (As a minor aside: Both the strange attractor associated with Lorenz's discovery and the effect of it in RL are known as the "Butterfly Effect"; the attractor looks much like a butterfly, and sensitive dependence on initial conditions may be summed up roughly as "A butterfly flapping its wings in Brazil may cause enough perturbation in the atmosphere to cause a tornado in Texas a few days down the line". That's also why the one fella in this thread keeps mentioning butterflies, btw. :)

As it turns out, a large number of the variables in weather forecasting are nonlinear. Not only that, in many areas you have multiple weather influences that can brew up storms in a jiffy and make them dissipate almost as quickly. The Ohio Valley--which lies on an eastern extension of Tornado Alley--is infamous for this: cold air rushes towards us from Canada, warm air comes up nice and moist from the Gulf, the jet stream frequently puts enough twist in the air, and we usually catch stuff from the "big" Tornado Alley out west... all this together means storms can brew up with amazing speed and fury out here ("popcorn" tornadic thunderstorms aren't unknown here--we had some pop up January 3rd-4th, which proceeded to spawn an F3 tornado that hoovered a fair portion of the city of Owensboro, and the supercell that spawned the 1995 tornado that hit the Mt. Washington area [south of Louisville] brewed up about that quick). Also, fun statistic: Kentucky is not in the top ten for tornados per state, but IS in the top ten for killer tornados per state and killer tornados per square mile--in other words, we don't get them as often as Oklahoma, but when we get them they tend to be F2-F3 and up... all to the point that we joke that Ohio Valley weather is more chaotic than chaos itself :)

To be honest, I'd say fourteen days is DAMNED optimistic. I have never seen a forecast in my area that was more than three or four days old that was anything close to being accurate (of course, I live in the Ohio Valley, which has weather systems that make chaosologists cream, meteorologists scream, and put the fear of God into storm chasers to the point they state they think the Ohio Valley is entirely too dangerous to chase tornados in :)...usually in Knoxville, TN I've found the forecasts more accurate because they don't have to deal with as much crap variable-wise). Even in relatively calm areas five or six days is REALLY stretching it...I honestly don't think meteorology is going to be able to improve much on that. You might get more detail (a better idea of where snow will fall and maybe how much), but you aren't going to get any closer to long-term forecasts, and in the spring in the Ohio Valley you'll be lucky sometimes to get one or two days in during storm season. (Hell, I'll be impressed if they can actually determine accurate amounts of snowfall. I have NEVER, EVER in my 26 years seen an accurate snowfall amount forecast; hell, 50% of the time they can't even tell if it's going to be snow, freezing rain, or sleet...and this is with folks in the NWSFO who have been there longer than I've been alive, and with the most experienced TV meteorologist having done weather here for some thirty years (to the point he helps out the NWSFO at times) and who also teaches the advanced meteorology classes at U of L...and all of them completely and utterly unable to tell how much snow one will get. All of them missed the 1994 super-snowstorm (24 inches in one snowstorm in Louisville, which is a record--the city, for which four or five inches starts to be a "big snow", was literally paralysed for a month and the only way one could get anywhere was by 4WD...the city and county governments were literally commissioning folks with high-clearance 4WD vehicles to transport folks to the hospitals and suchlike)... like I said, I'll be IMPRESSED if they can get to the level of predicting how much snow will fall, much less long-term forecasting. :)

And as for experience with weather modeling... most of mine is in using the models for my own attempts at forecasting (I'm just a wee bit of a weather nut, to the point I'm seriously considering taking Skywarn classes and maybe even meteorology courses in the future, though I'm NOT up to chasing tornados just yet :). If you've been in an area for some time you learn which models work best (there are actually several different models, such as the NGM, the AVN, the ETA, etc.) for the way the weather actually behaves in your area... the most important computer in any forecast is the big meaty two-pound one between yer ears :)

Re:What about the models? (2)

Windigo The Feral (N (6107) | more than 14 years ago | (#1359438)

Pyramid dun said:

Here in the Ohio valley, weather is unpredictable enough, but winter weather is especially bizarre. It would be nice if the local forcasters had higher quality data and had to rely on their "gut feeling" less. It really sucks to wake up to a 1/2 inch of sleet frozen to the road when the forecaster assured the city we would only see light flurries!

That's because Ohio Valley weather is more chaotic than chaos itself. :) We don't even need the damn butterfly--all we need is Tom Wills[*] to blow his nose, and within the span of a week we will get F6 tornados, blizzards that dump two feet of snow and six inches of ice, AND a flood to boot--and nobody will see it coming till it brews up on top of them. ;)

Yes, I'm exaggerating. Not by much, though--to give an example of wonderful Ohio Valley winter weather--a few days before Christmas we get five inches of snow where maybe a "light dusting" was predicted. January 3rd, we get tornadoes because the weather is unusually springlike. Yesterday, snow was predicted but we got freezing rain and sleet instead. We are supposed to get freezing rain and snow tonight, but I will not be one bit surprised to wake up tomorrow to see a foot of snow on the ground and the city of Louisville entirely shut down because people do not know how to manage more than four inches of snow at a time. :)

As a minor aside--I remember reading that, largely because standard models cannot predict Ohio Valley winter weather worth a damn, the Louisville NWSFO is working on a new weather model specifically meant to predict Ohio Valley snowstorms...I wish them good luck, especially knowing our weird and wonderful weather (don't like it, wait fifteen minutes...it'll change...it might put the fear of God into you in the process, but it'll change, trust me :)

yeah, but... (1)

JackiePatti (115651) | more than 14 years ago | (#1359439)

...the new 786 processor IBM SP computer located in Bowie, Md.

Five times faster than the Cray C-90 it replaces, the new IBM can make 690 billion calculations per second. By September it will be speeded up to 2.5 trillion calculations per second...

Yeah, it sounds sorta c00l, but can it run Linux? ;)

Re:It's Not The Size or Speed/It's Also The Granular (1)

coats (1068) | more than 14 years ago | (#1359440)

While it's nice that they have a bigger and better computer, unless and until they have better input/initialization data to feed it, I can't see how the forecasts will get any better.

Twice a day (0Z and 12 Z) the main prediction models are initialized with data from all over the world. Not only surface data, but "upper air" data as well. Upper air data come from sparsely located stations that actually have the ability to send up and record data from weather balloons.

Exactly right. On the other hand, there is recent work that may greatly increase the amount of upper-air data available: there is a program to put up a fleet of satellites to carefully measure the behavior of GPS satellite signals as they pass through the atmosphere when the edge of the earth comes "between" the GPS satellite and the receiving satellite, and to use the diffraction profile observed as the GPS satellite goes "over the horizon" to derive density profiles (and potentially wind profiles, via interferometry) very densely over the entire globe. This presents an incredible opportunity to solve the atmospheric-data sparseness problem.

Another major deficiency in today's meteorological models is getting the land surface right (e.g., just how wet is the soil? how much of the sun's incoming energy shows up as conducted heat, and how much serves to evaporate water?). Present models aren't very good at this; we're working on it here at NCSC (see URL http://envpro.ncsc.org/projects/dashmm [ncsc.org]), but it winds up very computational -- you have to use resolutions well under 1 km in order to get the terrain-slope/drainage effects right. And you need to do that globally! (Satellite data aren't very useful; satellite radar generally doesn't penetrate beyond about 1 cm. And there's no funding to put up long-wave-radar interferometry equipment that might do better.)

Re:What about the models? (1)

coats (1068) | more than 14 years ago | (#1359441)

Here in the Ohio Valley, weather is unpredictable enough, but winter weather is especially bizarre. It would be nice if the local forecasters had higher quality data and had to rely on their "gut feeling" less. It really sucks to wake up to a 1/2 inch of sleet frozen to the road when the forecaster assured the city we would only see light flurries!
Three problems:
  1. ice microphysics: It is really hard to simulate "freezing", given the usual pattern of super-cooled water droplets that suddenly decide (in a chaotic, nonlinear way) just when they are going to freeze.

  2. land surface: the present models don't do a sophisticated-enough job of this, and the resulting transformation of input solar radiation (etc.) into driving forces for the weather is not modeled well enough.

  3. spatial resolution: the computational cost goes up inversely with the fourth power of the grid-cell size.
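
To put numbers on point 3 (simple arithmetic following the fourth-power rule above: three space dimensions plus a time step that must shrink along with the cell size):

    # Relative cost of refining the grid, if cost ~ (1/h)**4.
    base = 64.0   # km, the coarse grid size mentioned upthread
    for h in (64.0, 32.0, 16.0, 8.0, 1.0):
        print(f"h = {h:5.1f} km  ->  relative cost = {(base / h) ** 4:,.0f}x")

Going from a 64-km grid to the sub-1-km resolution needed for terrain-slope effects costs a factor of roughly sixteen million, which is why no single hardware purchase settles the matter.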

Re:Obligatory open source comment (0)

Anonymous Coward | more than 14 years ago | (#1359442)

We also need an open source Fortran 90 compiler.

Re:In defense of meteorology (1)

Zurk (37028) | more than 14 years ago | (#1359443)

Not the sprinkler system. The fire department responded to the fire, which started inside the C90, and they put it out with dry extinguishers, which wrecked it.

Wonder who's behind THIS one! (1)

fleckster (136683) | more than 14 years ago | (#1359444)

And who's going to be uploading this weather information ahead of time? Why, you guessed it! Good old Bill Gates! That's right, we all know he controls the weather! (By the way Bill, thanks for all the snow that's comin to us down here in the mid-east! Woohoo!)

Bring on the karma-toilet if you think this isn't funny! hehehehe!

Re:What drives theoretical limit? (1)

Skim123 (3322) | more than 14 years ago | (#1359445)

I had always heard that they are about 20% less confident in their predictions for each day out from the current one. So, if they say, "there is a 50% chance of rain tomorrow," that really means there is an 80% chance that there is a 50% chance it will rain tomorrow. Two days ahead the confidence is 60%, then 40%, and so on.
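
Taking that rule of thumb at face value (a toy calculation, not an official NWS statistic), the compounding looks like this:

    # The stated chance of rain, discounted by a confidence that
    # falls 20 percentage points for each day of lead time.
    stated_chance = 0.50
    for days_out in range(1, 6):
        confidence = max(0.0, 1.0 - 0.2 * days_out)
        print(f"{days_out} day(s) out: {confidence:.0%} confidence "
              f"-> effective chance {confidence * stated_chance:.0%}")

By day five the confidence hits zero, consistent with the other posters' observation that forecasts beyond four or five days are mostly noise.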

Use Distributed.net to predict the weather (1)

Skim123 (3322) | more than 14 years ago | (#1359446)

Why not use distributed.net to help predict the weather? 786 processors... ha! We could have so many more...

Re:Weather computer's predictions (0)

Anonymous Coward | more than 14 years ago | (#1359447)

Distributed clients like Seti@home and d.net only work well for trivially parallel problems. Weather models are not, so transmission latency and lack of bandwidth would kill performance.

This is one of those applications where dedicated supercomputers beat a beowulf cluster handily.