
Conversation

@jonasbhend (Contributor) commented Jan 7, 2026

Add maps of forecast verification scores

Changes

  • Add score components for maps to verif.nc (see the sketch after this list)
  • Make verif.nc temporary to avoid storage of large data volumes
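As a rough illustration of what the first bullet involves, the sketch below shows one way per-grid-point score components could be computed so that they stay plottable as maps. This is only a sketch, not the code in this PR: the function name, the forecast/obs arguments and the dim="time" default are assumptions.

```python
import xarray as xr

def spatial_scores(forecast: xr.DataArray, obs: xr.DataArray, dim: str = "time") -> xr.Dataset:
    """Reduce only over `dim`, so Bias, MAE and RMSE keep the spatial
    dimensions and can be written to verif.nc and plotted as maps."""
    error = forecast - obs
    return xr.Dataset(
        {
            "bias": error.mean(dim),
            "mae": abs(error).mean(dim),
            "rmse": (error ** 2).mean(dim) ** 0.5,
        }
    )
```

Reducing only over the time dimension, rather than over all dimensions, is what makes the output large enough to raise the storage concerns discussed further down.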

@Louis-Frey Louis-Frey force-pushed the MRB-650-Maps-simplified branch from 2185fd6 to 9eb4643 Compare January 22, 2026 12:43
jonasbhend and others added 29 commits January 27, 2026 16:28
(collapsed commit list; the truncated commit titles cover summary statistics, Bias/RMSE/MAE map plots, PNG output, and passing a dim argument to _compute_statistics)

Co-authored-by: Francesco Zanetta <francesco.zanetta@meteoswiss.ch>
@Louis-Frey Louis-Frey force-pushed the MRB-650-Maps-simplified branch from 52fce5c to be6fa35 Compare January 27, 2026 17:14
(further truncated commit titles: bias colour scale symmetric about zero, vmin/vmax defaults still to be defined for variables other than T_2M, colour-map defaults centralised and picked up by the marimo plotting cell, and an accompanying change in the Snakefile)
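The colour-map commits above mention a bias scale symmetric about zero with per-variable vmin/vmax defaults. Purely as an illustration of that idea (the function name, colormap choice and output path are made up, not taken from the PR), a centred diverging scale could look like this:

```python
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

def plot_bias_map(bias: xr.DataArray, vmax: float | None = None, out: str = "bias_map.png"):
    """Plot a bias map with a diverging colormap centred on zero.

    If vmax is not given, the largest absolute bias is used so that the
    colour scale stays symmetric about zero."""
    if vmax is None:
        vmax = float(abs(bias).max())
    norm = TwoSlopeNorm(vmin=-vmax, vcenter=0.0, vmax=vmax)
    fig, ax = plt.subplots()
    bias.plot(ax=ax, cmap="RdBu_r", norm=norm)
    ax.set_title("Bias")
    fig.savefig(out, dpi=150)
    plt.close(fig)
```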
@Louis-Frey

I came across two different problems with this feature branch:

  1. On the current last commit, I run into memory issues in rule verif_metrics_aggregation. The job seems to be killed by SLURM because it runs out of memory (analysis of the log files and consultation with ChatGPT strongly suggest this). This may confirm Francesco's reservations regarding the computation of the spatial metrics.

  2. When I extended the code to also plot the maps for the baselines, I got errors in rule report_experiment_dashboard several times last week. This prompted me to test the code on the last commit today, which resulted in the error described in 1.

So overall I can't really make sense of this. On the one hand, I apparently was able to run verif_metrics_aggregation before, because the code only failed later in report_experiment_dashboard. Also, about a week ago I was able to run the full pipeline on a full year of daily forecasts with the ICON-CH1 emulator, and it completed without errors.

Moreover, the problem in 2. suggests that even if problem 1. does not occur, issues may still arise later when the verification files are aggregated in the dashboard. This may be due to the large size of the verification files (about 17 GB for the run and about 11 GB for the ICON-CH1 baseline). However, this too ran without error in some cases (the 1-year experiment), so the failure does not seem to be consistent.
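On the out-of-memory kill in verif_metrics_aggregation, one mitigation that might be worth trying, assuming the verification files are netCDF and dask is available (the file names, the "time" dimension and the chunk size below are made up), is to open them lazily and reduce them chunk by chunk instead of loading ~17 GB at once:

```python
import xarray as xr

# Open the large verification file lazily; with `chunks` set, nothing is
# read into memory yet and all operations build a dask task graph instead.
verif = xr.open_dataset("verif.nc", chunks={"time": 24})

# Reduce over time per grid point. compute() then evaluates the graph
# chunk by chunk, so peak memory stays close to a few chunks rather than
# the full array.
maps = verif.mean(dim="time").compute()
maps.to_netcdf("verif_maps.nc")
```

Simply requesting more memory for the SLURM job would also work in the short term, but only postpones the problem as the verification period grows.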

It would be great if you could look into this!

@Louis-Frey Louis-Frey requested review from dnerini and frazane February 9, 2026 13:41
@frazane (Contributor) commented Feb 9, 2026

Make verif.nc temporary to avoid storage of large data volumes

By doing this, verification will have to be re-computed for every run, every time the workflow is executed. I don't think we want this, no?

@Louis-Frey

Make verif.nc temporary to avoid storage of large data volumes

By doing this, verification will have to be re-computed for every run, every time the workflow is executed. I don't think we want this, no?

Yes, that could be problematic. Jonas introduced it and I haven't given it much thought. Should I change it back?

@Louis-Frey

OK, I most likely fixed problem 2. See commit 7b50809.

