Benchmarking the code in time
deitrr edited this page Mar 4, 2022 · 4 revisions
(In progress)
To ensure that changes to the code do not cause regressions, this page describes the benchmarking procedures that can and should be used.
- Every six months or so, run THORtestrunner.py with the option "-t repo_benchmarks". This runs a set of simulations that have been tested and published before. It takes a few days on a single GPU, depending on the hardware.
- Then run mjolnir/plot_test_fn.py, which will generate a set of plots from these simulations, while also exercising the plotting and regridding code in mjolnir. Be sure to update the variable month_tag with the month and year of the simulations, and commit the figure outputs to the THOR repository.
- Compare those figures to the previous sets in repo_benchmark_figures!
- Run some tests with bincomp and THORtestrunner.py to make sure that binary compatibility hasn't been broken. Include some halt/restart cases to check that variables are read back in properly.
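The first step above, running the benchmark suite, might be scripted like this. The exact invocation is an assumption (check THORtestrunner.py's help output for the real flags); the file check just guards against running from the wrong directory:

```shell
# Run the published-benchmark simulations; expect this to take a few days
# on a single GPU. (Assumed invocation; verify the flags for your checkout.)
if test -f THORtestrunner.py; then
    python3 THORtestrunner.py -t repo_benchmarks
else
    echo "THORtestrunner.py not found; run from the THOR repository root"
fi
```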
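The format of month_tag is not specified on this page; one plausible scheme, producing a tag like Mar2022 from the current date, is:

```shell
# Build a month+year tag (e.g. "Mar2022") to label this benchmark set.
# The exact format expected by plot_test_fn.py is an assumption here.
month_tag=$(date +%b%Y)
echo "month_tag=${month_tag}"
```

Whatever convention is used, keeping it identical across runs makes the figure sets in repo_benchmark_figures easy to compare by name.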
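The binary-compatibility check in the last step can be sketched as a byte-level comparison of a fresh output file against a stored reference (file names here are hypothetical; the actual bincomp tool presumably compares field by field rather than whole files):

```shell
# Compare a freshly produced output file against a stored reference copy.
# Identical bytes => binary compatibility held for this case.
new_out="esp_output_new.h5"   # hypothetical output of the current code
ref_out="esp_output_ref.h5"   # hypothetical stored reference output
if cmp -s "$new_out" "$ref_out"; then
    echo "binary compatible"
else
    echo "outputs differ (or files missing): inspect with bincomp"
fi
```

For the halt/restart cases, the same comparison can be applied between a straight-through run and a halted-then-restarted run of the same setup.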