Right now, the Vulkan versions can submit arbitrarily large command buffers to the GPU, because they pack all simulation steps to be processed into a single command buffer.

However, it has been observed that the NVIDIA Linux driver encounters performance issues when processing very large command buffers, and that the AMD Linux driver can encounter correctness issues in the same scenario. Furthermore, too big a command buffer could potentially result in a desktop freeze or a kill by the system's GPU watchdog, depending on whether GPU preemption is implemented between command buffers or between individual dispatches. For all these reasons, it would be a good idea to limit command buffers to a certain number of commands, and emit multiple command buffers once this limit is reached.
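As a sketch of the splitting policy described above: the batch of pending simulation steps would be chunked so that no command buffer exceeds a chosen limit. The `MAX_COMMANDS_PER_BUFFER` constant and `batch_sizes` helper below are hypothetical, and the actual threshold would need tuning against the drivers mentioned above:

```rust
/// Illustrative limit on dispatches recorded into one command buffer;
/// the real value would need per-driver tuning.
const MAX_COMMANDS_PER_BUFFER: usize = 64;

/// Split `total_steps` simulation steps into batch sizes, each of which
/// would be recorded into its own command buffer.
fn batch_sizes(total_steps: usize, max_per_buffer: usize) -> Vec<usize> {
    let mut sizes = Vec::new();
    let mut remaining = total_steps;
    while remaining > 0 {
        let batch = remaining.min(max_per_buffer);
        sizes.push(batch);
        remaining -= batch;
    }
    sizes
}

fn main() {
    // 150 steps with a limit of 64 would emit three command buffers.
    println!("{:?}", batch_sizes(150, MAX_COMMANDS_PER_BUFFER)); // [64, 64, 22]
}
```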
This will require modifying the abstract simulation trait to use `impl GpuFuture` instead of a concrete type in a strategic location.
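A minimal sketch of what that trait change might look like, using a stand-in `GpuFuture` trait rather than vulkano's so the example is self-contained (all trait, type, and method names here are hypothetical, and `impl Trait` in trait return position requires Rust 1.75+):

```rust
// Stand-in for vulkano::sync::GpuFuture, so this sketch compiles on its own.
trait GpuFuture {
    fn buffers_submitted(&self) -> usize;
}

// A concrete future type that would result from chaining several
// command buffer submissions together.
struct BatchedFuture {
    buffers: usize,
}

impl GpuFuture for BatchedFuture {
    fn buffers_submitted(&self) -> usize {
        self.buffers
    }
}

// Returning `impl GpuFuture` keeps the concrete future type out of the
// trait's public signature, so an implementation is free to submit one
// command buffer or several chained ones.
trait Simulation {
    fn run_steps(&mut self, steps: usize) -> impl GpuFuture;
}

struct GpuSimulation {
    max_commands_per_buffer: usize,
}

impl Simulation for GpuSimulation {
    fn run_steps(&mut self, steps: usize) -> impl GpuFuture {
        // A real implementation would record and submit one command buffer
        // per batch; this sketch only counts how many it would need.
        let buffers = steps.div_ceil(self.max_commands_per_buffer);
        BatchedFuture { buffers }
    }
}

fn main() {
    let mut sim = GpuSimulation { max_commands_per_buffer: 64 };
    let future = sim.run_steps(150);
    println!("{}", future.buffers_submitted()); // 3
}
```

The design point is that callers only depend on the `GpuFuture` bound, so splitting one submission into several stays an internal detail of each implementation.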