In the original Convex Hull benchmarks that I added (#51, fixed in #55), I made a mistake: my number of points was a nice power of 10, which unfortunately is not a multiple of 3, and this produced an error that stopped the intended benchmark from running to completion.
Confusingly, the resulting numbers still looked somewhat plausible, so it wasn't immediately obvious that the benchmark was bogus.
Also, since the benchmark "game" window starts fullscreen, it covers the usual debugger tab that would turn red and might otherwise catch your attention.
Mistakes like the one described above might be caught when the benchmark is added or during code review. But it would be nice to detect script errors while running the benchmark: a future (hopefully unreleased) version of the engine could mistakenly break some feature, and currently the only way to spot that in the results would be if the time changes noticeably (likely, but not guaranteed).
So, what would be a good way of detecting script errors when running a benchmark?
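As a rough sketch of one possibility (assuming the runner launches each benchmark as a subprocess and that the engine reports script errors on stderr; the exact marker string below is an assumption, not the engine's actual output):

```python
import subprocess

def run_benchmark(cmd: list[str]) -> bool:
    """Run one benchmark; return True only if no script error was detected."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    # Assumptions: script errors are printed to stderr with a recognizable
    # marker, and/or the process exits with a non-zero status code.
    had_script_error = "SCRIPT ERROR" in proc.stderr
    return proc.returncode == 0 and not had_script_error
```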
We could add a boolean like `ran_ok` in the JSON, and another column showing "OK" or "ERR" in the GUI.
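For example, a minimal sketch of how a consumer of the results could surface that flag (the `results.json` layout and field names here are assumptions, not the project's actual format):

```python
import json

# Hypothetical per-benchmark entry, with only "ran_ok" being new:
# {"name": "convex_hull", "time_ms": 12.3, "ran_ok": false}

def print_results(path: str) -> None:
    with open(path) as f:
        results = json.load(f)
    for entry in results:
        # Treat a missing flag as an error so that older result files stand out.
        status = "OK" if entry.get("ran_ok", False) else "ERR"
        print(f"{entry['name']:<40} {entry.get('time_ms', 0.0):>10.2f} ms  {status}")

print_results("results.json")
```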