sensible way to flag model instability? #109
This is a great topic for discussion! CC @yaniyuval who is looking into this. The short answer is that we are still trying to nail down stability for NeuralGCM models so that we can fix it in future versions. We have also noticed that some initial conditions can be much more unstable than others. In particular, instability often shows up first as a drift in mean surface pressure.
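Based on that observation, a minimal drift check might look like the sketch below. It assumes the run output is an xarray Dataset with a `surface_pressure` variable (in Pa) and `time`/`latitude`/`longitude` dimensions; all of these names, and the 100 Pa threshold, are assumptions rather than NeuralGCM's actual API.

```python
import numpy as np
import xarray as xr

def surface_pressure_drift(ds: xr.Dataset, threshold_pa: float = 100.0) -> bool:
    """Return True if the area-weighted global-mean surface pressure drifts
    by more than `threshold_pa` Pa between the first and last time step.

    Assumes `ds` has a `surface_pressure` variable in Pa with dims
    (time, latitude, longitude); names and threshold are guesses.
    """
    # Area weighting: grid cells shrink toward the poles.
    weights = np.cos(np.deg2rad(ds["latitude"]))
    mean_ps = ds["surface_pressure"].weighted(weights).mean(
        dim=("latitude", "longitude"))
    drift = float(mean_ps.isel(time=-1) - mean_ps.isel(time=0))
    return abs(drift) > threshold_pa
```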
As Stephan wrote, we are still working on the instability issues.
Interesting, thanks for the quick replies. I looked at your paper figures and some of my cases, and I agree that the instability often appears to originate near the surface. The most common mode of failure appears to be:
The good side of the story is that such propagating behavior is consistent with physics. It also suggests the model may capture processes related to stratosphere-troposphere coupling, which is a weakness of many physical models and is useful for long-range prediction; whether NeuralGCM does better remains to be determined. For the instability, my guesses are: 1) smoothing the topography and tweaking the lower-boundary settings could help (see the sketch below); 2) if the dynamical core has hyperdiffusion parameters, tuning them may help damp some of the initial fine-scale noise. Edit: the stratosphere (0-150 hPa) also shows early signs of instability. Looking forward to the fix that you two mentioned.
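For the first guess, a simple way to experiment is Gaussian smoothing of the orography field. This is only an illustration; the `orography` array, its (latitude, longitude) layout, and the sigma value are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_topography(orography: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Damp fine-scale structure in a (latitude, longitude) orography field.

    `sigma` is in units of grid cells and would need tuning. Longitude is
    periodic, so wrap that axis; clamp values at the poles.
    """
    return gaussian_filter(orography, sigma=sigma, mode=("nearest", "wrap"))
```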
Does the team have a sensible way to detect and flag unstable simulations?
Context: I finished some AMIP runs forced by modified boundary conditions. For each start date, I have twenty ensemble members generated with different random seeds. While the simulations are stable for most start dates, a few combinations of initial conditions and boundary forcing are so unstable that most members fail.
The plot compares the standard deviations along the longitude dimension at the start and end dates. The ratios are shown in [latitude, ensemble member] space; any ratio beyond 5.0 is likely outside the normal range of seasonality and indicates model instability. Conventional physical models usually flag such instability and terminate the simulation early.
The plot is generated using code along the lines of the following sketch (the dataset layout, variable, and dimension names are assumptions, since the exact script depends on how the runs are stored):
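```python
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical layout: a Dataset with a field over
# (time, level, latitude, longitude, member); all names are assumptions.
ds = xr.open_dataset("amip_run.nc")  # hypothetical path
field = ds["temperature"].isel(level=0)  # pick one variable/level to inspect

# Standard deviation along longitude at the start and end dates.
std_start = field.isel(time=0).std(dim="longitude")
std_end = field.isel(time=-1).std(dim="longitude")

# Ratio in [latitude, ensemble member] space.
ratio = (std_end / std_start).transpose("latitude", "member")

fig, ax = plt.subplots()
im = ax.pcolormesh(ratio["member"], ratio["latitude"], ratio.values,
                   vmin=0.0, vmax=5.0)
ax.set_xlabel("ensemble member")
ax.set_ylabel("latitude")
fig.colorbar(im, ax=ax, label="std(end) / std(start)")
plt.show()

# Ratios beyond 5.0 flag likely-unstable members.
unstable = (ratio > 5.0).any(dim="latitude")
print("unstable members:", np.flatnonzero(unstable.values))
```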
For reference, the same analysis for a good start date looks like this: