Hi, I see that it is possible to train with fp16 precision by adding something like `fp16 = dict(loss_scale=512)` to the config file (I haven't actually tried it).
I would like to run inference at fp16 precision as well. I'm currently doing inference with `init_detector` and `inference_detector` from the inference API. Is there a config parameter, or some other way, to run inference at fp16 precision (to speed it up)?
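To frame the question, here is a minimal sketch of the kind of thing I'm hoping for, assuming an mmcv 1.x / mmdetection 2.x setup where `mmcv.runner.wrap_fp16_model` is available (if I read `tools/test.py` correctly, this is how it handles the `fp16` config key at test time). The config and checkpoint paths below are placeholders:

```python
from mmcv.runner import wrap_fp16_model
from mmdet.apis import init_detector, inference_detector

# Placeholder paths -- substitute your own config and checkpoint.
config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'

# Build the detector and load fp32 weights as usual.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Convert weights to half precision (norm layers stay in fp32) and set the
# fp16_enabled flag so @auto_fp16-decorated forwards cast their inputs.
wrap_fp16_model(model)

result = inference_detector(model, 'demo.jpg')
```

From what I understand, the `fp16 = dict(loss_scale=512)` entry only configures the loss-scaling optimizer hook during training, so on its own it would not change how `inference_detector` runs. Is something like the above the intended approach, or is there a supported config option?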