Is it the resolution or the model itself?
Increased resolution is not necessarily the answer.
Uncle Sam's weather forecasters are going to use graphics processors to hopefully better predict the course of catastrophic storms. The US government's NOAA agency will build a new computer system boosted by 760 Nvidia Pascal GPUs, according to Ian Buck, the chip company's veep of accelerated computing and CUDA inventor. When …
Quite. While I have no direct experience of climate modelling, I spent many years working on problems in numerical analysis, generally related to fluids. On many occasions I saw groups led astray by concentrating on improving resolution through both code optimisation and improved machine performance. While these can help, throwing more resources at the wrong (sub-)questions can result in a lot of wasted time.
Of course the US group might just have been unlucky with some of their initial conditions, rather than necessarily worse than the European team.
Back in October 2012, NOAA got the path of Hurricane Sandy wrong – its GFS model reckoned Sandy would swerve away from the US – whereas the European Centre for Medium-Range Weather Forecasts (ECMWF) got it right, calculating America would take a direct hit. The superstorm killed 233 people and caused $75bn of damage as it barreled into the Caribbean, Bermuda, and the east coast of the United States.
And yet three years later there are still hundreds of people homeless, and thousands with horribly damaged homes, while they get screwed over by insurance companies and the U.S. government.
>and yet three years later
And what does that have to do with NOAA, the models used, and the creation of a faster / better machine?
Are you somehow hoping that the NOAA model should have been the correct one, and that Nature follows whichever model you think it should?
Not so long ago, whilst investigating why 64 bit linux systems were giving wildly different (wrong) answers to various astrophysics problems, a researcher I work with discovered that the 32 bit systems were giving wrong answers too - and unfortunately they were treated as canonical.
The cause was found to be rounding errors, or more precisely, cumulative rounding errors.
Taking the output of IEEE FP calculations and using those as input for more calculations will give different results to doing the entire run in one go, depending on the level of precision you pull out as the answer in that intermediate stage.
The results of that discovery are still (quietly) bouncing across the astrophysics community. It will probably take a while before the full importance is realised.
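The effect described above can be demonstrated without any astrophysics code at all. The sketch below (illustrative only, not the researcher's actual code) sums 0.1 a million times, once keeping the running total in double precision throughout, and once rounding each partial sum through IEEE 754 single precision, as if intermediate results were written out and read back at lower precision between stages:

```python
import struct

def to_single(x: float) -> float:
    # Round a double through IEEE 754 single precision and back,
    # mimicking an intermediate result stored in a 32-bit variable.
    return struct.unpack('f', struct.pack('f', x))[0]

N = 1_000_000
full = 0.0     # entire run kept in double precision
staged = 0.0   # partial sums rounded to single precision at each step
for _ in range(N):
    full += 0.1
    staged = to_single(staged + 0.1)

print(f"double-precision run:         {full:.6f}")
print(f"staged single-precision run:  {staged:.6f}")
print(f"cumulative drift:             {abs(staged - full):.6f}")
```

The double-precision run stays within a hair of 100,000, while the staged run drifts by hundreds, because each rounding error compounds rather than cancels.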
When it comes to butterflies affecting hurricanes, the same kind of problem occurs. Tiny perturbations in the input can result in huge errors in the output.
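That sensitivity to initial conditions is easy to see with the classic Lorenz system (the toy model the "butterfly effect" comes from). This is a minimal sketch, not a weather model: two integrations whose starting x-values differ by one part in a billion end up wildly separated:

```python
# Two integrations of the Lorenz system whose initial x differs by 1e-9.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.005, 10_000  # integrate out to t = 50

def step(x, y, z):
    # One forward-Euler step (crude, but enough to show the divergence).
    return (x + DT * SIGMA * (y - x),
            y + DT * (x * (RHO - z) - y),
            z + DT * (x * y - BETA * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)
max_sep = 0.0
for _ in range(STEPS):
    a, b = step(*a), step(*b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

print(f"largest x-separation over the run: {max_sep:.3f}")
```

The separation grows exponentially until it is as large as the attractor itself, which is exactly why a tiny error in a hurricane model's initial conditions can swing the forecast track by hundreds of miles a few days out.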
GFDL is a national weather service model. Why they ignored it and went with the others that showed it moving out to sea, who knows?
It seems like the problem wasn't that the NWS needs more computing power, just that they had overly stubborn misplaced faith in another model despite two showing an east coast landfall. Anytime you use multiple models, no matter how much resolution they have, you will get different answers when looking days in advance. The problem is, if they warn every time even one model shows landfall, there will be a lot of false alarms and people will quit believing them.
"GFDL is a national weather service model."
No it isn't. GFDL is the Geophysical Fluid Dynamics Laboratory. It is a part of NOAA focused on basic research; it doesn't produce actual forecasts. The GFS is the Global Forecast System, run by NCEP (the National Centers for Environmental Prediction), which is the branch that actually does operational forecasting. From the GFDL's factsheet:
"GFDL supports the National Weather Service's efforts to produce a state-of-the-art weather prediction system. GFDL's Finite Volume dynamical core (FV3) is among contenders from across several Federal agencies to upgrade the dynamical core used in current operational weather forecasts. FV3 provides superior representation of rotating flows that characterize significant weather events, such as strong winter storms, tropical cyclones, super-cell thunderstorms, and tornadoes."
In other words, GFDL attempts to produce the best model possible, which the NCEI then uses to actually produce forecasts. The data shown in the article compares the model currently in use with a new one being developed but not yet in use. (And also the predictions of the Hurricane Weather Research and Forecast system, from yet another part of NOAA and presumably using another different model).
It's also worth bearing in mind that while the GFDL prediction was better for the later part of the storm path, it was actually worse than all the others on the early part. In this particular case the later part turned out to be more important for the USA, but it's not obviously the better model just from looking at the result shown here. If you had to make a decision on which model to believe based only on the first half of the storm's path, the green line would not be the obvious choice.
I'm all for exploring different avenues in science. Many things have been discovered by making a mistake, or doing the "wrong" thing.
But if the model used by NOAA is really flawed (and one wrong result does not necessarily mean it is), then adding more precision and computing power might not be the answer.
I think we might need to treat this as a flight-computer issue. Have three centers compute an estimate using three different models, and use the two that best agree to determine the final forecast. The models used must, of course, have a record of accuracy to be taken into account.
Such an approach might not have helped here if the wrong models had been chosen (beware of Not-Invented-Here syndrome), but over time, such an approach could only help.
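The two-out-of-three voting idea above can be sketched in a few lines. This is purely illustrative: the model names and landfall points are hypothetical, and a real implementation would compare whole track time series and weight each model by its historical skill, not just pick the closest pair of single points:

```python
from itertools import combinations

def vote(forecasts):
    """Pick the pair of forecasts that agree most closely and average them.

    `forecasts` maps a model name to a predicted (lat, lon) landfall point.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    a, b = min(combinations(forecasts, 2),
               key=lambda pair: dist(forecasts[pair[0]], forecasts[pair[1]]))
    pa, pb = forecasts[a], forecasts[b]
    return (a, b), ((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2)

# Hypothetical landfall predictions from three models (illustrative numbers).
models = {"GFS": (35.0, -70.0), "ECMWF": (39.5, -74.5), "HWRF": (39.0, -74.0)}
pair, consensus = vote(models)
print(pair, consensus)  # the two coastal-landfall models outvote the outlier
```

With these numbers the GFS "out to sea" outlier is discarded and the consensus lands between the two models that agree, which is the point of the scheme.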
"But if the model used by NOAA is really flawed (and one wrong result does not necessarily mean it is), then adding more precision and computing power might not be the answer."
The problem is that there are actually multiple problems. The model is surely flawed and there is plenty of work being done to come up with better models. But at the same time, we know that no matter what model is used and how good it is, better resolution will improve the results. So it's not simply a case of throwing more computing power at it and hoping to solve everything, but rather that throwing more computing power at it will definitely help with one of the problems, making it that much easier to work on all the other problems.
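The claim that finer resolution helps whatever model you run has a simple numerical-analysis analogue. This toy example (a one-line derivative approximation, nothing to do with any actual weather code) shows the truncation error of a second-order scheme dropping roughly fourfold each time the grid spacing is halved:

```python
import math

def central_diff(f, x, h):
    # Second-order central difference: truncation error scales as h**2,
    # so halving the spacing cuts the error roughly fourfold.
    return (f(x + h) - f(x - h)) / (2 * h)

true = math.cos(1.0)  # exact derivative of sin at x = 1
errors = {h: abs(central_diff(math.sin, 1.0, h) - true)
          for h in (0.1, 0.05, 0.025)}
for h, err in errors.items():
    print(f"h = {h:<6} error = {err:.2e}")
```

The same logic applies to a forecast model's grid: more computing power buys smaller spacing, which shrinks discretisation error regardless of how good the underlying physics is, though it can never fix a wrong model.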
About two weeks before a tropical storm, an earthquake or a series of them occurs in order to break the current flow in the sea. That occurs to supply heat to build the singularity.
The path of the storm will take it through the epicentre of the earlier 'quake, and this is where cyclosis begins. Now see if some of that funding can be diverted to prove the research. The work has already been done; it just needs controls and cross-checks.
Changes in track from straight lines come with perturbations in a gyroscopic effect, so keenly demonstrated by Professor Eric Laithwaite. It has nothing to do with the so-called Coriolis effect.