Hi there,
Thank you for helping us find a bug. This error occurs in some versions of PyTorch, and we did not encounter it in our initial code tests. We have now fixed it in our latest code, so please pull the updated repository and try again!
To get better accuracy, we have some suggestions:
Make sure the learning rate follows the linear scaling rule with the total batch size: if total_batch_size increases k times, the learning rate should also increase k times. We set learning_rate=1e-4 for batch_size=5 on 2 GPUs (total_batch_size=5 * 2=10). If you use batch_size=2 on 1 GPU (total_batch_size=2), for example, you should change the learning rate to 2e-5.
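The scaling rule above can be sketched as a small helper; the reference values (lr=1e-4 at total batch size 10) are taken from this repo's defaults, and `scaled_lr` itself is just an illustrative function, not part of the codebase:

```python
# Linear scaling rule: scale the learning rate proportionally to total batch size.
# Reference point from this repo: lr = 1e-4 at total_batch_size = 10
# (batch_size=5 per GPU x 2 GPUs).

BASE_LR = 1e-4          # learning rate used with the reference setup
BASE_TOTAL_BATCH = 10   # 5 per GPU x 2 GPUs

def scaled_lr(batch_size_per_gpu: int, num_gpus: int) -> float:
    """Return the learning rate scaled linearly with total batch size."""
    total_batch = batch_size_per_gpu * num_gpus
    return BASE_LR * total_batch / BASE_TOTAL_BATCH

# batch_size=2 on 1 GPU -> total_batch_size=2 -> lr = 2e-5
print(scaled_lr(2, 1))  # -> 2e-05
```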
If you just want better performance, rather than a fair comparison with other methods in a paper, you can use stronger data augmentation. Setting transform in configs/train_config.py to presets.strong_album will increase accuracy.
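A sketch of what that change might look like in configs/train_config.py; `presets.strong_album` is the name mentioned above, but the exact import path is an assumption, so check how `transform` is currently defined in that file:

```python
# Sketch only -- the import path for `presets` is an assumption;
# mirror however configs/train_config.py already imports it.
from transforms import presets  # assumed module path

# Default augmentation might look like:
# transform = presets.detr

# Stronger Albumentations-based augmentation suggested above:
transform = presets.strong_album
```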
And remember to use larger backbones and load pretrained weights. We will release more pretrained weights in the next few weeks.
Question
Hi @xiuqhou, I wanted to train Relation_detr on my custom COCO dataset but am getting an error.
How can I resolve this?
Additional
Also, please suggest some fine-tuning strategies I should follow to get better accuracy.