NWS Announces Big Computer Upgrade

riverat1 writes "After being embarrassed when the Europeans did a better job forecasting Sandy than the National Weather Service, Congress allocated $25 million ($23.7 million after sequestration) in the Sandy relief bill for upgrades to forecasting and supercomputer resources. The NWS announced that its main forecasting computer will be upgraded from the current 213 TFlops to 2,600 TFlops by fiscal year 2015, more than a twelve-fold increase. The upgrade is expected to improve the horizontal grid resolution by a factor of 3, allowing more precise forecasting of local weather features. Some of the allocated funds will also be used to hire contract scientists to improve the forecast model physics and enhance the collection and assimilation of data."
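
A quick back-of-envelope check on those figures, assuming the common rule of thumb that refining the horizontal grid by a factor r costs roughly r^3 in compute (r^2 more grid columns plus a CFL-limited timestep that shrinks by r); vertical resolution, I/O, and model changes are ignored, so this is only a rough sketch:

```python
# Back-of-envelope scaling for the NWS upgrade figures quoted above.
# Assumption (not from the article): refining the horizontal grid by a
# factor r costs ~ r**3 in compute -- r**2 more grid columns plus a
# CFL-limited timestep that shrinks by r.

old_tflops = 213.0
new_tflops = 2_600.0

speedup = new_tflops / old_tflops                # ~12.2x raw compute
implied_refinement = speedup ** (1.0 / 3.0)      # ~2.3x under the r**3 rule
cost_of_3x = 3.0 ** 3                            # ~27x for a full 3x refinement

print(f"raw speedup:             {speedup:.1f}x")
print(f"refinement it buys:      {implied_refinement:.2f}x (r**3 rule)")
print(f"cost of a 3x refinement: {cost_of_3x:.0f}x")
```

On that crude accounting, the ~12x speedup alone buys a bit over 2x refinement, so the quoted factor of 3 presumably also leans on model and efficiency improvements.
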
  • by Anonymous Coward on Monday May 20, 2013 @08:15AM (#43772481)

    maybe it would pay for the upgrade.

  • Precise garbage (Score:2, Insightful)

    by Anonymous Coward

    Well unless we're in a butterfly wing effect situation, you'll generate 3 times the amount of garbage.

    I think this is more a subsidy to the troubled supercomputing market disguised as a technical improvement.

  • so.. (Score:5, Funny)

    by msauve ( 701917 ) on Monday May 20, 2013 @08:34AM (#43772537)
    Why not just pay attention to the European forecasts, which would cost nothing?
    • Re:so.. (Score:5, Informative)

      by BRock97 ( 17460 ) on Monday May 20, 2013 @08:52AM (#43772623) Homepage

      Why not just pay attention to the European forecasts, which would cost nothing?

      Actually, the NWS pays a great deal of money for access to the ECMWF (the European model of choice) and is required to encrypt it before it is sent out to the various forecast offices over its NOAAPort system.

    • Re:so.. (Score:4, Insightful)

      by Teun ( 17872 ) on Monday May 20, 2013 @09:49AM (#43772969)
      Because, as others have already explained, the NWS already uses the EU-generated data, and vice versa.

      More importantly, for something this critical you'd want more than one model, if only to check one against the other.

  • by Anonymous Coward

    Looks like they'll have to shut down most essential weather services. The People have to be taught a lesson about telling government to spend less money.

  • by account_deleted ( 4530225 ) on Monday May 20, 2013 @08:39AM (#43772563)
    Comment removed based on user account deletion
    • The NWS doesn't need a faster supercomputer. The current one can pump out bad results based on a flawed set of algorithms at a perfectly useable rate. What the current computer can't do is act as an East Coast regional processing point for THIS [slashdot.org].
  • Not that the 5 people in that field have trouble getting jobs anyway. But if you like math and the ocean, it's a good field to go into.

  • by fygment ( 444210 ) on Monday May 20, 2013 @08:52AM (#43772617)

    ... the Europeans did a better job forecasting Hurricane Sandy. Oh. Didn't know that. But hey when they make a movie of it, I'm sure they will present as fact that the American system was the most awesome thing and NWS was right on the money with typical awesome American ingenuity .... sorry, 'Argo' flashback.

    • To the same end, historical analysis shows there has been one model that has held true more often than not.

      The weather tomorrow will be exactly the same as today (+/- 1%)

      Sure, it's a meteorological and mathematical joke, but you can't argue with the results.
  • Hopefully this new system will be able to calculate in Celsius instead of Fahrenheit...

    • The model physics core measures temperature in kelvins [wikipedia.org], you temperature-insensitive clod!
  • by Chrisq ( 894406 ) on Monday May 20, 2013 @08:55AM (#43772641)
    It appears that the computers Europe was using for the "better forecast" were not as powerful [ecmwf.int] as the [metoffice.gov.uk] old system being replaced. Upgrading because Europe's forecast was better would be like taking a slow route to a holiday destination and then buying a Porsche because your neighbours got there sooner, when all you need is a new roadmap.
    • by Chrisq ( 894406 ) on Monday May 20, 2013 @09:02AM (#43772681)
      Also, though I would like to believe that Europeans have superior algorithms, realistically the hurricane prediction could be a "one-off". We know that modeling weather can give widely different results based on small variations in starting conditions, assumptions, etc. Unless there is evidence that European forecasts are consistently better, it could just be luck. With the known chaotic nature of storm systems, it wouldn't surprise me if the "butterfly effect" of the rounding errors when converting from C to F were enough to displace a storm by hundreds of miles!
      • by Anonymous Coward

        To detect butterfly effects they bracket the scenarios with a small delta and see if it swings off chaotically. If that happens, then they know they can't make a realistic prediction, because the sensors they have don't permit it. Adding more computing power doesn't fix anything then.

        No, it was simply a slightly better algorithm run on a computer with a tiny fraction of the processing power of the *current* US supercomputer. I notice there's a lot of government money going into supercomputers as the PC market dries up.
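
The "bracket with a small delta" idea described in the comment above can be illustrated with a minimal ensemble experiment on the Lorenz-63 toy system; the textbook parameters and the 1e-6 perturbation are arbitrary choices for illustration, not any NWS configuration:

```python
# Minimal sketch of bracketing a forecast with a small perturbation,
# using the Lorenz-63 toy system rather than a real forecast model.
import numpy as np

def rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

control = np.array([1.0, 1.0, 1.0])
member = control + np.array([1e-6, 0.0, 0.0])   # tiny initial-condition delta

for step in range(1, 4001):
    control = rk4_step(control)
    member = rk4_step(member)
    if step % 800 == 0:
        spread = np.linalg.norm(control - member)
        print(f"t = {step * 0.01:5.1f}   spread = {spread:.3e}")
# Once the spread grows to the size of the state itself, the forecast beyond
# that horizon is limited by the observations, not by computing power.
```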

      • by Shinobi ( 19308 ) on Monday May 20, 2013 @09:20AM (#43772803)

        It's not just once. Several hurricanes and other severe weather systems have been most accurately predicted by the European model. In fact, if you read some of the links in the article, you'll see references to that.

      • Re: (Score:3, Informative)

        With the known chaotic nature of storm systems it wouldn't surprise me if the "butterfly effect" of the rounding errors when converting from C to F would be enough to displace a storm by hundreds of miles!

        Absolutely not the case. First, all non-trivial computational fluid dynamics codes (e.g., those used for weather prediction) use non-dimensionalized values in the governing equations. You're not solving in C vs. F (you'd never use either anyway, but absolute kelvins vs. degrees Rankine), or meters vs. feet, but in non-dimensional numbers, which are only converted back to dimensional ones after all the heavy computation is done.

        Secondly, even if one were to use dimensional values in solving the equations, the round-off error from a unit conversion would be dwarfed by the uncertainty in the measured initial conditions.
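
A toy demonstration of the non-dimensionalization point made above: starting from either absolute unit, the non-dimensional temperatures handed to the solver agree to machine precision. This is a generic sketch, not the actual GFS or ECMWF code:

```python
# Non-dimensionalize a synthetic temperature field starting from kelvins
# and from degrees Rankine, and compare the results.
import numpy as np

rng = np.random.default_rng(0)
t_kelvin = 250.0 + 60.0 * rng.random(1_000_000)   # synthetic field, 250-310 K
t_rankine = t_kelvin * 1.8                        # same field in degrees Rankine

t_ref_k = 300.0            # arbitrary reference temperature in kelvins
t_ref_r = t_ref_k * 1.8    # the same reference, in Rankine

theta_from_k = t_kelvin / t_ref_k      # non-dimensional temperature
theta_from_r = t_rankine / t_ref_r     # same quantity via the other unit

max_rel_diff = np.max(np.abs(theta_from_k - theta_from_r) / theta_from_k)
print(f"max relative difference: {max_rel_diff:.2e}")   # on the order of 1e-16
```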

      • Read the "Scientific Weather Discussion" in Weather Underground forecasts. More often than not, they find that the ECMWF forecast fits the data better. Been that way for years.

  • by macwhizkid ( 864124 ) on Monday May 20, 2013 @08:58AM (#43772657)

    It's kind of astonishing how little we (by which I mean the U.S.A.) spend on weather forecasting relative to the economic effects. The economic costs of weather are in the hundreds of billions of dollars annually. You can't change the weather, but more accurate predictions will save more lives and property.

    I try not to plan my life around the weather, but a few million to possibly offset billions in damage from an incorrect hurricane path prediction is a no-brainer.

    • Where did you find information on the USA's spending on weather forecasting? Is it really that much lower than that of the European countries?

      People seem to see so much embarrassment in the fact that the European weather forecasting system is so much better, but Europe consists of some 50 countries with a total population of 750 million. I don't know how many of those countries pay into that weather system's funding pot, but I'll betcha it's most of them.

      The fact that our system, from one country with half the

  • I'm a bit confused ... why is so much money being spent if the technology already exists elsewhere? What about remote computing? Why can't we share resources? A 2.6 PFlop supercomputer had better last us a long time. I can't imagine what the "1.21 Gigawatt" power bill will look like.
  • by some old guy ( 674482 ) on Monday May 20, 2013 @09:10AM (#43772747)

    Here's a great chance to jump in on another multi-billion dollar government tech boondoggle. Why let SAIC and the other Beltway Bandits scarf up all the big bucks? A bunch of us ought to slap a shell company together and bid like there's no tomorrow. Get on board that gravy train while we can!

    If this goes anything like recent FAA, USPS, and VA projects to name but a few, a successful contractor can bill for years while never delivering a finished, operational product.

    Surely we can spec a 2.6K TFlop monster, with ancillary systems, and market it to the GSA purse-holders. Easy math. Calculate the probable actual cost (fair bid price), triple it (IBM, Cray, or SAIC's price), and multiply by 0.9 = winning bid (never bid too low on a government contract; they automatically chuck out the highest and lowest).

    What could possibly go wrong?

  • by WGFCrafty ( 1062506 ) on Monday May 20, 2013 @09:15AM (#43772779)

    You don't need a weather man to know which way the wind blows.

    Just a 2.6 PFlop computer.

  • We need the focus back on public schools, affordable college education, respect for science, and good learning for EVERYONE who wants a good education.
  • This is how government funding works. I was at a workshop on the then-new field of space weather forecasting in the mid-1990s where the keynote address was given by Dr. Joe Friday, at the time the head of the NWS. He pointed out that we would see no serious funding from Congress until there was the space-weather equivalent of a train wreck that kills many voters, or costs the monied interests lots of dinero. (Joe later lost his job when a non-forecastable flood in the Midwest that exceeded the 100-year

    • This. It's a bit extreme, but it does happen. Seen it locally, viz. stoplights and flood-control remediation.

  • by Anonymous Coward

    Posting anon to avoid burning bridges. NCAR has tried to develop better forecast models, but they've laid off experienced US staff to hire foreign H1B grad students to write their software. I lost my 18+ year position as a software engineer at NCAR while helping to replace the 1980s crap they use to verify the accuracy of their models with modern software, using modern techniques. They have great hardware but very amateur software. I got a "we've lost funding for you" while they were hiring H1Bs. I was of

  • Somehow it was necessary to mention that the budget was affected by sequestration?

  • by Miamicanes ( 730264 ) on Monday May 20, 2013 @10:45AM (#43773349)

    Supercomputing improvements are nice, but I personally want to see them get the cash to profoundly increase their NEXRAD backhaul (the data lines connecting their radar sites to the outside world).

    Right now, they're HORRIBLY backhaul-constrained. I believe most/all NEXRAD sites only have 256kbps frame relay to upload raw data to NOAA's datacenter for further processing & distribution to end users. As a result, they're forced to throw away data at the radar site to trim it down to size, and send it via UDP with little/no modern forward error correction. That's a major reason why glitches are common. In theory, the full-resolution data is archived to tape on site and CAN be mailed in if some major weather event happens that might merit future study, but the majority of collected data gets archived to tape, then unceremoniously overwritten a few days later. And most of the tapes that DO get sent in sit in storage for weeks or months before finally getting added to their near-line data archive.

    The low backhaul bandwidth is made worse by the fact that the secondary radar products (level 3 radar, plus the derived products like TVS) get derived on site, and wedged into the SAME bandwidth-constrained data stream. That's part of the reason why level 3 data lags by 6-15 minutes... they send the raw level 2 data, and interleave the previous scan's level 3 data into the bandwidth that's left over. I believe the situation with TDWR sites is even worse... I think THEY actually have a single ISDN line, which is why level 2 data from them isn't available to the public at all.

    As I understand it, they can't use lossless compression for two reasons: since they have no error correction for the UDP stream, a glitch would take out a MUCH bigger chunk of data (possibly ruining the remainder of the tilt's data), and adding error correction would defeat the size savings from the compression. Apparently, the processors at the site are pretty slow (by modern computer standards), so it would also add significant delay to getting the data out. When you're tracking a tornado running across the countryside at 50-60 mph, 30 seconds matters.

    If NWS had funding to increase their backhaul to at least T-1 speeds, they could also tweak their scan strategies a bit to make them more useful to others. For example, they could do more frequent tilt-1 scans (the lowest level, which is the one that usually affects people the most directly), and almost immediately upgrade all current NEXRAD sites to have 1-minute updates for tilt 1 (adding about a minute to the time it takes to do a full volume scan, but putting data more immediately useful to end users out much more frequently).

    Going a step further, more bandwidth would open the door to a fairly cheap upgrade to the radar arrays themselves... they could mount a second, fixed-tilt antenna back-to-back with the current one (ideally at 10cm, like the main one, but possibly 5cm like TDWR if 10cm spectrum isn't available or a second dish of the proper size for 10cm wouldn't fit), and do some moderate hardware and software tweaks that would effectively increase their tilt-1 scan rate to one every 6-10 seconds (because every full rotation of the main antenna would give them a full tilt-1 rotation off the back). This means they could send out raw tilt-1 data at 6-10 second frequency. It's not quite realtime, but it would be a HUGE improvement over what we have now.

    Unfortunately, NWS has lots of bureaucracy and a slow funding pipeline. I think it's safe to say that the explosion in popularity of personal radar apps, combined with mobile broadband, almost totally caught them by surprise. Ten years ago, very few people outside NWS were calling for large-scale NEXRAD upgrades. Now, with abundant Android and iOS apps and 5 Mbps+ mobile data the norm, demand is surging.

    That said, I hope they DON'T squander a chunk of cash on public datafeed bandwidth instead of upgrading their backhaul. I'd rather see them do the back-end upgrades that only THEY can do, and tell people who want reliable & frequent upgrades to get their data feed through a private mirror service (like allisonhouse or caprockweather) who can upgrade their own backhaul as needed, instead of having to put in funding requests years in advance.
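
For context on the 256 kbps figure in the comment above, here is a rough throughput check; the frame-relay and T1 rates come from the comment, while the per-volume-scan sizes are assumptions for illustration only, not official NEXRAD numbers:

```python
# Rough backhaul throughput check for the NEXRAD discussion above.
# The 256 kbps frame-relay and T1 figures come from the comment; the
# volume-scan sizes below are illustrative assumptions.

FRAME_RELAY_BPS = 256_000
T1_BPS = 1_544_000

def sustained_bps(scan_megabytes: float, scan_seconds: float) -> float:
    """Average bit rate needed to ship one volume scan before the next starts."""
    return scan_megabytes * 8_000_000 / scan_seconds

for size_mb in (5.0, 10.0, 15.0):               # assumed Level II scan sizes
    need = sustained_bps(size_mb, 6 * 60)       # one volume scan every 6 minutes
    print(f"{size_mb:4.1f} MB/scan -> {need / 1000:6.0f} kbps needed "
          f"({need / FRAME_RELAY_BPS:4.2f}x frame relay, {need / T1_BPS:4.2f}x T1)")
# Remember the same link also carries the Level 3 / derived products, which is
# why the raw data gets trimmed at the site under the frame-relay budget.
```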

    • by JTinMSP ( 136923 )
      One key thing you missed: the NWS WSR-88D radar system *can* take a scan every minute, at the expense of resolution and distance. A "full" scan across the commonly used tilts takes *six* minutes. You can have an OC-3 to every radar site, but you're still only going to get data every six minutes most of the time.
      • by Miamicanes ( 730264 ) on Monday May 20, 2013 @02:03PM (#43775107)

        You're mostly right, but you're overlooking the software limits that exist mainly due to the limited bandwidth. If they upgraded the sites to a full T1 and tweaked the software a bit, they could give us new tilt-1 updates every minute, with about 15-60 seconds of radar-to-end-user latency, without major hardware upgrades besides the T1 interface itself.

        Compare that to now, where we get only a single tilt-1 scan every 6 minutes, and that scan might itself be delayed by another 6-10 minutes on top of that. There are ALREADY several VCP programs that sample tilt 1 every minute... they just can't send out that data, and only use it locally for calculating their derived products, because they don't currently have the dedicated bandwidth to send it out.

        Remember, WSR88D is kind of like an Atari 2600... it has very few limits that are truly "hard" and insurmountable. Rather, they're software-imposed in recognition of other limiting factors like backhaul bandwidth, or are precautionary limits imposed to guarantee that some specific product can always be fully-derived and delivered within some specific amount of time, or in a way that won't be destroyed by random errors. Many of them could be substantially improved with even minor hardware upgrades in other areas.

        There are real limits to resolution imposed by scattering, wavelength, and particle size, but from what I've read, the current level 2 scan data is still throwing away about 30-50% of the nominal max resolution, and enormous amounts of theoretical resolution that could be recovered through oversampling. At this point, NWS doesn't even *know* what they could derive offsite from oversampled level 2 data, because they've never had the backhaul resources to even *fantasize* about streaming it in its full oversampled glory, or even archiving it all on site. 20 years ago, the idea of having 64 terabytes of on-site raid storage for Amazon/Google-like raw indiscriminate archiving would have been unthinkable, and never even entered into the equation.

        The current scan rates are a compromise that tries to balance their backhaul against the need to track fast-moving storms like tornadoes. If they mounted a second, fixed-tilt dish back to back with the current dish so that every rotation produced a tilt-1 sample, they could alternate the back-facing samples between slow and fast pulse rates (so every other scan would be alternately optimized for range or resolution), and dedicate the front-facing dish currently in place to sampling the higher tilts (interleaving them to sample lower tilts twice at both PRF rates). Freed of the need to dedicate at least two full sweeps out of each volume scan to tilt 1 (because the back-facing antenna would sample tilt one every time the dish rotated), they could possibly slow down the rotation rate and use it to increase the resolution.

        The closest thing I've seen to my idea was a paper someone at NOAA wrote a year or two ago, proposing a compromise between fixed-tilt back-to-back conventional radar and full-blown (and likely cost-prohibitive) 360-degree fixed phased-array radar. Basically, their idea was to build a limited wedge of PAR modules capable of sampling 4 tilts over ~1 degree horizontal and mount it to the back side of the existing dish assembly, so that it could sample 4 tilts per revolution and give us the equivalent resolution of 4-tilt level 3 TDWR every 12-15 seconds. The idea is that NOAA would then have a TDWR-resolution, rapidly-updating radar source for tracking fast-moving or rapidly-developing storms off the back, and could slow down the overall rotation to get more detailed ultra-hi-res samples than we have now off the front dish.

        The catch, from what I recall, was that they'd HAVE to decrease the RPM, and use 5.8GHz (like TDWR) for the rear array, because there just isn't enough S-band 10cm spectrum available to simultaneously broadcast 5 pulse beams without creating an interference scenario that would make their current range-folding issues look downright tame. They'd
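
A small sketch of the scan-cadence arithmetic behind the back-to-back antenna idea above; the rotation period and tilt count are rough assumptions chosen to loosely match the six-minute volume scans mentioned in this thread, not WSR-88D VCP specifications:

```python
# Tilt-1 revisit time, conventionally vs. with a rear fixed-tilt antenna.
# ROTATION_S and TILTS_PER_VOLUME are assumptions for illustration only.

ROTATION_S = 20.0        # assumed seconds per full antenna rotation
TILTS_PER_VOLUME = 17    # assumed elevation cuts per "full" volume scan
                         # (lower tilts counted twice for dual-PRF sampling)

# Conventional: tilt 1 comes around only once per complete volume scan.
volume_scan_s = TILTS_PER_VOLUME * ROTATION_S
print(f"conventional tilt-1 revisit: {volume_scan_s / 60:.1f} min")

# Back-to-back fixed-tilt antenna: tilt 1 is sampled on every rotation,
# no matter which elevation the front dish is currently scanning.
print(f"rear-antenna tilt-1 revisit: {ROTATION_S:.0f} s")

# Spinning the pedestal faster, as the comment suggests, would push the
# rear-antenna figure toward the 6-10 seconds mentioned above.
```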

    • Wow. Informative as all get out. Thanks.

  • by PineHall ( 206441 ) on Monday May 20, 2013 @11:08AM (#43773539)
    Cliff Mass, a University of Washington atmospheric sciences professor, has been arguing for an upgrade [blogspot.com] for a long time. He sees great potential [blogspot.com] for this new system if it is used right. The reasons [blogspot.com] for the upgrade boil down to the "huge economic and safety benefits" of better forecasting, and he says these benefits are within our reach.
  • Which big government contractor needs work now? That seems to drive these projects more than actual need. I'm guessing the NWS/NOAA has plenty of computing resources; they just need to fine-tune the models a little and collaborate on techniques with the Europeans...
  • The accuracy measurements in the article are meaningless by themselves. Does anybody know how those slight differences in accuracy translate into dollars saved? Furthermore, why can't we piggyback on the European system? They run world simulations, after all, and share the data.

    • by Shinobi ( 19308 )

      The Euro model by itself was more accurate than the multi-model forecast run by the NWS, which in turn was more accurate than raw GFS. IIRC, the Euro model predicted the Sandy landfall 320 km off, while the NWS multi-model analysis was 1,500 km off, and raw GFS said it wouldn't hit land at all, going WAAAAY east.

      The NWS multi-model forecast predicting landfall only came out a few days before it hit, while the Euro model predicted it more than a week before. The US Navy multi-model forecast was also ahead of the NWS

      • Thanks, but that really doesn't answer my question about cost/benefit. Even if the European model weren't available, and even if an improved model showed such improvements every time, the economic benefit could still be negligible.

        • by Shinobi ( 19308 )

          Improved warning time leads to better preparedness, which leads to less costly aftermaths. Well, at least in sane societies.

        • They're talking about $25M. While that would more than max out my credit cards, compare it to the $65B that Sandy cost. That's just one storm. So they're proposing to spend 0.04% of the cleanup cost of Sandy on a shinier new computer that hopefully will give them better forecasts. I say it's worth the gamble.
        • OK, here's a hard benefit: imagine how much money it costs a company like Citibank to close offices for a day or more in anticipation of an upcoming storm. It's staggering. If it allows a company like that to make better decisions about which offices and branches are unquestionably going to have to be closed, and which ones might be able to safely remain open, the hard dollar value would be measured in millions. Ditto for concert venues, sporting events, tourist destinations (hello, Disney? Myrtle Beach?),

          • You're stating the obvious. But we're not talking about an all-or-nothing thing here, we're talking about a small improvement to a generally unreliable prediction.

            By analogy, we could build a personal scale that measures my weight precisely down to the milligram, provably beating all the other personal scales on the market. But that would be a waste of money, because such precise measurements just aren't useful for most uses of a personal scale.

              By analogy, we could build a personal scale that measures my weight precisely down to the milligram, provably beating all the other personal scales on the market. But that would be a waste of money, because such precise measurements just aren't useful for most uses of a personal scale.

              Your analogy doesn't stand up. You're talking about taking a system that works quite well (a typical bathroom scale) and needlessly refining it. Storm forecasting is, as you point out, not nearly as accurate as we would like or could use. Therefore it's worth trying to improve it. I don't think anyone can say exactly how much a new computer will help; that's still partly a research question. However, since Sandy alone cost 2,600 times as much as this new computer, it seems like it's worth trying.

  • How long will it be before TWC tries to name it?
  • It would be nice if they'd also do something about the remote sensing infrastructure to get more data to these nice new supercomputers. My current understanding is that the Feds are getting increasingly weak in that department.
